
6.1 Introduction

All humans enjoy the right to life, liberty and security of the person. The right to life is also included as a core right in 77% of the world’s constitutions (UN 2018), is the cornerstone of other rights and is enshrined in international human rights instruments (Table 6.1).

Table 6.1 Right to life in international human rights instruments

State parties that are signatories to the human rights instruments enshrining the right to life have a duty to take the necessary measures to ensure that individuals are protected from its violation, that is, its loss, deprivation or removal.

Artificial intelligence (AI) can support an individual’s enjoyment of life, liberty and security by, for example, supporting the diagnosis and treatment of medical conditions. Raso et al. (2018) outline how criminal justice risk assessment tools could benefit low-risk individuals through increased pre-trial releases and shorter sentences. Reports suggest that AI tools could help identify and mitigate human security risks and lower crime rates (Deloitte n.d., Muggah 2017).

AI can have adverse effects on human life, liberty and security in a variety of ways (Vasic and Billard 2013; Leslie 2019), as elaborated in this chapter. Human rights issues around life, liberty and security of persons are particularly serious, and the risks of using AI need to be weighed against the risks incurred by not using it, just as for other innovations. AI systems identified as high-risk (European Commission 2021) include those used in critical infrastructure (e.g. transportation) that could put the life and health of people at risk; in educational or vocational training that determine access to education and the professional course of someone’s life (e.g. the scoring of exams); in the safety components of products (e.g. AI applications in robot-assisted surgery); in employment, the management of workers and access to self-employment (e.g. CV-sorting software for recruitment procedures); in essential private and public services (e.g. when credit scoring denies citizens the opportunity to obtain a loan); in law enforcement that may interfere with people’s fundamental rights (e.g. evaluation of the reliability of evidence); in migration, asylum and border control management (e.g. verification of the authenticity of travel documents); and in the administration of justice and democratic processes (e.g. applying the law to a concrete set of facts). These categories of high-risk AI systems all have the potential to impact the right to life, liberty and security, some in more direct ways than others.

Life-threatening issues have been raised regarding the use of robot-assisted medical procedures and robotics systems in surgery (Alemzadeh et al. 2016), robot accidents and malfunctions in manufacturing, law enforcement (Boyd 2016), retail and entertainment settings (Jiang and Gainer 1987), security vulnerabilities in smart home hubs (Fránik and Čermák 2020), self-driving and autonomous vehicles (AP and Reuters 2021), and lethal attacks by AI-armed drone swarms and autonomous weapons (Safi 2019). We look at three different cases affecting human life, liberty and security: one in the transportation context (self-driving cars), one related to the home (smart home security), and one in the healthcare service setting (adversarial attacks).

6.2 Cases of AI Adversely Affecting the Right to Life, Liberty and Security of Persons

6.2.1 Case 1: Fatal Crash Involving a Self-driving Car

In May 2016, a Tesla car was the first known self-driving car to be involved in a fatal crash. The 40-year-old passenger/driver died instantly after colliding with a tractor-trailer. The tractor driver was not injured. “According to Tesla’s account of the crash, the car’s sensor system, against a bright spring sky, failed to distinguish a large white 18-wheel truck and trailer crossing the highway” (Levin and Woolf 2016). An examination by the Florida Highway Patrol concluded that the Tesla driver had not been attentive and had failed to take evasive action. At the same time, the tractor driver had failed, during a left turn, to give right of way, according to the report (Golson 2017).

In this case, the driver had put his car into Tesla’s autopilot mode, which was able to control the car. According to Tesla, its autopilot is “an advanced driver assistance system that enhances safety and convenience behind the wheel” and, “[w]hen used properly”, is meant to reduce a driver’s “overall workload” (Tesla n.d.). While Tesla clarified that the underlying autonomous software was designed to nudge consumers to keep their hands on the wheel to make sure they were paying attention, that does not seem to have happened in this case, and the result was a fatality. According to Tesla, “the currently enabled Autopilot and Full Self-Driving features require active driver supervision and do not make the vehicle autonomous” (ibid).

In 2018, one of Uber’s self-driving test cars hit and killed a pedestrian; the test driver in charge of monitoring the vehicle was subsequently charged with negligent homicide. An investigation by the National Transportation Safety Board (NTSB) concluded that the crash had been caused by the Uber test driver being distracted by her phone, and implicated Uber’s inadequate safety culture (McFarland 2019). The NTSB also found that Uber’s system could not correctly classify and predict the path of a pedestrian crossing midblock.

In 2021, two men were killed in Texas after the Tesla vehicle they were in, travelling at high speed, went off the road and hit a tree. The news report also mentioned that the men had been discussing the autopilot feature before they drove off (Pietsch 2021). Evidence is believed to show that no one was driving the vehicle when it crashed.

While drivers seem to expect self-driving cars, as marketed to them, to give them more independence and freedom, such cars are not yet “autonomous”, as Tesla itself states. The autopilot function and the “Full Self-Driving” capability are intended for use with a fully attentive driver who has their hands on the wheel and is ready to take over at any moment.

While some research (Kalra and Groves 2017; Teoh and Kidd 2017) seems to suggest that self-driving cars may be safer than those driven by the average human driver, the main case and the further examples cited here point to human safety challenges from different angles: the safety of the drivers, passengers and other road users (e.g. cyclists, pedestrians and animals) and objects that encounter self-driving cars.

Other standard issues raised about self-driving cars, as outlined by Jansen et al. (2020), relate to security (the potential for hacking that compromises personal and sensitive data) and responsibility, that is, where responsibility for harms caused lies: with the manufacturer, the system programmer or software engineer, the driver/passenger, or the insurers. A responsibility gap could also occur, as pointed out by the Council of Europe’s Committee on Legal Affairs and Human Rights, “where the human in the vehicle—the ‘user-in-charge’, even if not actually engaged in driving—cannot be held liable for criminal acts and the vehicle itself was operating according to the manufacturer’s design and applicable regulations” (Council of Europe 2020). There is also the challenge of shared driving responsibilities between the human driver and the system (BBC News 2020).

The underlying causes that require addressing in these cases include software and system vulnerabilities, inadequate safety risk assessment procedures, insufficient oversight of vehicle operators, and human error and driver distraction (including a false sense of security) (Clifford Law 2021).

6.2.2 Case 2: Smart Home Hubs Security Vulnerabilities

A smart home hub is a control centre for home automation systems, such as those operating the heating, blinds, lights and internet-enabled electronic appliances. Such systems allow the user to interact remotely with the hub using, for instance, a smartphone. A user who is equipped to activate appliances remotely can arrive at home with the networked gas fire burning and supper ready in the networked oven. However, it is not only the users themselves who can access their smart home hubs, but also external entities, if there are security vulnerabilities, as was the case for three companies operating across Europe. (Fránik and Čermák 2020)

Smart home security vulnerabilities directly affect all aspects of the right to life, liberty and security of the person. For example, man-in-the-middle attacks that intercept or spoof communication between smart home devices, and denial-of-service attacks that disrupt or shut devices down, could compromise user well-being, safety and security.
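To make this concrete, the following minimal sketch shows a smart home client that verifies its hub’s TLS certificate before sending a command. The hub address, port, CA file and JSON command format are illustrative assumptions rather than details from the reported case; the point is that devices which skip certificate verification are the ones most exposed to man-in-the-middle spoofing.

```python
# Minimal sketch: a smart home client that refuses to talk to an unverified hub.
# Hostnames, ports, file names and the command format are hypothetical.
import socket
import ssl

HUB_HOST = "hub.example.local"   # hypothetical hub address
HUB_PORT = 8883                  # hypothetical TLS port

# A TLS context that actually verifies the hub's certificate. Devices that
# disable verification (verify_mode = CERT_NONE) are open to man-in-the-middle
# attacks in which an attacker impersonates the hub.
context = ssl.create_default_context(cafile="hub_ca.pem")  # CA pinned to the hub
context.check_hostname = True
context.verify_mode = ssl.CERT_REQUIRED
context.minimum_version = ssl.TLSVersion.TLSv1_2

with socket.create_connection((HUB_HOST, HUB_PORT)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HUB_HOST) as tls_sock:
        # Communication is now authenticated and encrypted; a spoofed hub
        # without a certificate from the pinned CA is rejected outright.
        tls_sock.sendall(b'{"device": "thermostat", "command": "status"}\n')
        print(tls_sock.recv(4096).decode(errors="replace"))
```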

Such vulnerabilities, and attacks exploiting them, can threaten the home itself, together with the peaceful enjoyment of life and health within it. Unauthorised access could also result in threats to human life and health. For example, as outlined in a report from the European Union Agency for Cybersecurity (ENISA), safety might be compromised, and human life thus endangered, by the breach or loss of control of a thermostat, a smoke detector, a CO2 detector or smart locks (Lévy-Bencheton et al. 2015).

Vulnerabilities and threats in smart home security can facilitate criminal actions and intrusions, and attacks exploiting them may themselves constitute crimes (e.g. physical damage, theft or unauthorised access to smart home assets) (Barnard-Wills et al. 2014).

While there are many other ethical issues that concern smart homes (e.g. access, autonomy, freedom of association, freedom of movement, human touch, informed consent, usability), this case study further underlines two critical issues connected to the right to life: security and privacy (Marikyan et al. 2019; Chang et al. 2021). Hackers could spy on people, gain access to very personal information and misuse smart-home-connected devices in a harmful manner (Laughlin 2021). Nefarious uses could include identity theft, location tracking, home intrusions and access lock-outs.

The responsibility for ensuring that smart home devices and services do not suffer from vulnerabilities or attacks is shared: it lies largely with manufacturers and service providers, but also with users. Users of smart-home-connected devices must carry out due diligence when purchasing smart devices, buying from reputable companies with good security track records and checking that the devices’ security is up to the task.

6.2.3 Case 3: Adversarial Attacks in Medical Diagnosis

Medical diagnosis, particularly in radiology, often relies on images. Adversarial attacks on medical image analysis systems are a problem (Bortsova et al. 2021) that can put lives at risk. This applies whether the AI system is tasked with the medical diagnosis or whether the task falls to radiologists, as an experiment with mammogram images has shown. Zhou et al. used a generative adversarial network (GAN) model to make intentional modifications to radiology images taken to detect breast cancer (Zhou et al. 2021). The resulting fake images were then analysed by an AI model and by radiologists. The adversarial samples “fool the AI-CAD model to output a wrong diagnosis on 69.1% of the cases that are initially correctly classified by the AI-CAD model. Five breast imaging radiologists visually identify 29–71% of the adversarial samples” (ibid). In both cases, a wrong cancer diagnosis could lead to risks to health and life.

Adversarial attacks are “advanced techniques to subvert otherwise-reliable machine-learning systems” (Finlayson et al. 2019). Such techniques, for example the addition of tiny manipulations (adversarial noise) to images that might help confirm a diagnosis, could be used to guarantee positive trial results or to control the rates of medical interventions to the advantage of those carrying out the attacks (Finlayson et al. 2018).
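As an illustration of how little “noise” such an attack needs, the sketch below applies the fast gradient sign method (FGSM), one well-known way of generating adversarial examples, to a generic image classifier. This is a minimal sketch assuming a PyTorch model; the classifier, images and labels are placeholders, not the models or data used in the studies cited above.

```python
# Minimal FGSM sketch for a generic PyTorch classifier (illustrative only).
import torch
import torch.nn.functional as F

def fgsm_perturb(model, images, labels, epsilon=0.01):
    """Return adversarially perturbed copies of `images`.

    The perturbation is small enough to be hard to notice visually,
    yet it can flip the classifier's output.
    """
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Step in the direction that increases the loss most quickly.
    adversarial = images + epsilon * images.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Hypothetical usage with a trained classifier and a normalised image batch:
# adv = fgsm_perturb(classifier, image_batch, label_batch)
# print(classifier(adv).argmax(dim=1))  # may no longer match the true labels
```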

To raise awareness of adversarial attacks, Rahman et al. (2021) tested COVID-19 deep learning applications and found that they were vulnerable to adversarial example attacks. They report that due to the wide availability of COVID-19 data sets, and because some data sets included both COVID-19 patients’ public data and their attributes, they could poison data and launch classified inference attacks. They were able to inject fake audio, images and other types of media into the training data set. Based on this, Rahman et al. (2021) call for further research and the use of appropriate defence mechanisms and safeguards.
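To show in simplified form what data poisoning can do to a classifier, the following sketch flips a fraction of training labels and compares the resulting model with one trained on clean data. It is an illustrative sketch using a synthetic scikit-learn dataset and logistic regression; it does not reproduce the specific attacks reported by Rahman et al. (2021).

```python
# Minimal label-flipping poisoning sketch on synthetic data (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def flip_labels(labels, fraction, rng):
    """Flip the binary labels of a random fraction of training samples."""
    poisoned = labels.copy()
    n_poison = int(fraction * len(labels))
    idx = rng.choice(len(labels), size=n_poison, replace=False)
    poisoned[idx] = 1 - poisoned[idx]
    return poisoned

rng = np.random.default_rng(0)
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned_model = LogisticRegression(max_iter=1000).fit(
    X_train, flip_labels(y_train, fraction=0.3, rng=rng)
)

print("accuracy with clean training data:   ", clean_model.score(X_test, y_test))
print("accuracy with poisoned training data:", poisoned_model.score(X_test, y_test))
```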

The case study and examples mentioned in this section expose the vulnerability of machine learning and deep learning applications in the healthcare setting. They show that, in the absence of appropriate defence mechanisms, safeguards and controls, such attacks could alter results to detrimental effect and cause serious harm.

6.3 Ethical Questions

All the case studies raise several ethical issues. Here we discuss some of the core ones.

6.3.1 Human Safety

To safeguard human safety, which has come to the fore in all three case studies, unwanted harms, risks and vulnerabilities to attack need to be addressed, prevented and eliminated throughout the life cycle of the AI product or service (UNESCO 2021). Human safety is rooted in the value of human life and well-being. Safety requires that AI systems and applications should not cause harm through misuse, questionable or defective design, or unintended negative consequences. Safety, in the context of AI systems, is connected to ensuring their accuracy, reliability, security and robustness (Leslie 2019). Accuracy refers to the ability of an AI system to make correct judgements, predictions, recommendations or decisions based on data or models (AI HLEG 2019). Inaccurate AI predictions may result in serious and adverse effects on human life. Reliability refers to the ability of a system to work properly using different inputs and in a range of situations, a feature that is deemed critical for both scrutiny and harm prevention (ibid). Security calls for protective measures against vulnerabilities, exploitation and attacks at all levels: data, models, hardware and software (ibid). Robustness requires that AI systems take a preventative approach to risk: they should behave reliably while minimising unintentional and unexpected harm and preventing unacceptable harm, while also ensuring the physical and mental integrity of humans (ibid).

6.3.2 Privacy

As another responsible AI principle, privacy (see Chap. 3) is also particularly implicated in the first and second case studies. Privacy, while an ethical principle and human right in itself, intersects with the right to life, liberty and security, and supports it with protective mechanisms in the technological context. This principle, in the AI context, includes respect for the privacy, quality and integrity of data, and access to data (AI HLEG 2019). Privacy vulnerabilities manifest themselves in data leakages which are often used in attacks (Denko 2017). Encryption by itself is not seen to provide “adequate privacy protection” (Apthorpe et al. 2017). AI systems must have appropriate levels of security to prevent unauthorised or unlawful processing, accidental loss, destruction or damage (ICO 2020). They must also ensure that privacy and data protection are safeguarded throughout the system’s lifecycle, and data access protocols must be in place (AI HLEG 2019). Furthermore, the quality and integrity of data are critical, and processes and data sets used require testing at all stages.

6.3.3 Responsibility and Accountability

When anything goes wrong, we ask who is responsible, so that decisions can be made about liability and accountability. Responsibility is seen in terms of ownership and/or answerability. In the cases examined here, responsibility might lie with different entities, depending on their role and/or culpability in the harms caused. The cases furthermore suggest that the allocation of responsibility may not be simple or straightforward. In the case of an intentional attack on an AI system, it may be possible to identify the individual orchestrating it. However, in the case of the autonomous vehicle or the smart home, the combination of many contributions and the dynamic nature of the system may make it difficult, if not impossible, to attribute the actions of the system.

Responsibility lies not only at the point of harm but goes to the point of inception of an AI system. As the ethics guidelines of the European Commission’s High-Level Expert Group on Artificial Intelligence outline (AI HLEG 2019), companies must identify the impacts of their AI systems and take steps to mitigate adverse impacts. They must also comply with technical requirements and legal obligations. Where a provider (a natural or legal person) puts a high-risk AI system on the market or into service, they bear the responsibility for it, whether or not they designed or developed it (European Commission 2021).

Responsibility faces many challenges in the socio-technical and AI context (Council of Europe 2019). The first, the challenge of “many hands” (Van de Poel et al. 2012), arises because the “development and operation of AI systems typically entails contributions from multiple individuals, organisations, machine components, software algorithms and human users, often in complex and dynamic environments” (Council of Europe 2019). A second challenge relates to how humans placed in the loop are made responsible for harms despite having only partial control of an AI system, in an attempt by other connected entities to shirk responsibility and liability. A third challenge is the unpredictable nature of interactions between multiple algorithmic systems, which generates novel and potentially catastrophic risks that are difficult to understand (Council of Europe 2019).

For now, responsibility for acts and omissions in relation to an AI product or service and system-related harms lies with humans. The Montreal Declaration for a Responsible Development of AI (2018) states that the development and use of AI “must not contribute to lessening the responsibility of human beings when decisions must be made”. However, it also provides that “when damage or harm has been inflicted by an AIS [AI system], and the AIS is proven to be reliable and to have been used as intended, it is not reasonable to place blame on the people involved in its development or use”.

Accountability, as outlined by the OECD, refers to

the expectation that organisations or individuals will ensure the proper functioning, throughout their lifecycle, of the AI systems that they design, develop, operate or deploy, in accordance with their roles and applicable regulatory frameworks, and for demonstrating this through their actions and decision-making process (for example, by providing documentation on key decisions throughout the AI system lifecycle or conducting or allowing auditing where justified). (OECD n.d.)

Accountability, in the AI context, is linked to auditability (assessment of algorithms, data and design processes), minimisation and reporting of negative impacts, addressing trade-offs and conflicts in a rational and methodological manner within the state of the art, and having accessible redress mechanisms (AI HLEG 2019).
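One practical way of supporting the auditability and documentation expectations described above is an append-only audit trail of system decisions. The sketch below is a minimal illustration under assumed requirements; the record fields, file format and function names are hypothetical, not drawn from the OECD or AI HLEG texts.

```python
# Minimal sketch of an append-only audit log for AI system decisions
# (illustrative only; field names and file format are assumptions).
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from typing import Optional

@dataclass
class DecisionRecord:
    timestamp: str            # when the decision was made (UTC, ISO 8601)
    model_version: str        # which model version produced it
    input_digest: str         # hash of the input: traceable without storing raw data
    output: str               # the decision or prediction returned
    reviewer: Optional[str]   # human in the loop, if any

def log_decision(model_version: str, raw_input: bytes, output: str,
                 reviewer: Optional[str] = None,
                 logfile: str = "decisions.jsonl") -> None:
    """Append one decision record to a JSON Lines audit log."""
    record = DecisionRecord(
        timestamp=datetime.now(timezone.utc).isoformat(),
        model_version=model_version,
        input_digest=hashlib.sha256(raw_input).hexdigest(),
        output=output,
        reviewer=reviewer,
    )
    with open(logfile, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(asdict(record)) + "\n")

# Hypothetical usage:
# log_decision("triage-model-1.3", image_bytes, "no finding", reviewer="radiologist_01")
```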

But accountability in the AI context is not without its challenges, as Busuioc (2021) explains. Algorithm use creates deficits that affect accountability: the compounding of informational problems, the absence of adequate explanation or justification of algorithm functioning (and limits on the ability to question it), and ensuing difficulties in diagnosing failure and securing redress. Various regulatory tools have thus become important for boosting AI accountability.

6.4 Responses

Given the above issues and concerns, it is important to put considerable effort into preventing AI-related human rights issues around life, liberty and security of persons from arising. The following tools will be particularly helpful in doing so.

6.4.1 Defining and Strengthening Liability Regimes

An effective liability regime offers incentives that help reduce the risk of harm and provides means to compensate the victims of such harm. “Liability” may be defined by contractual requirements, fault- or negligence-based liability, or no-fault or strict liability. With regard to self-driving cars, liability might arise in tort for drivers and insurers and in product liability for manufacturers. Different approaches are adopted to reduce risks, depending on the type of product or service.

Are current liability regimes adequate for AI? As of 1 April 2022, there were no AI-specific legal liability regimes in the European Union or United States, though there have been some attempts to define and strengthen existing liability regimes to take into account harms from AI (Karner et al. 2021).

The European Parliament’s resolution of 20 October 2020 with recommendations to the European Commission on a civil liability regime for AI (European Parliament 2020) outlined that there was no need for a complete revision of the well-functioning liability regimes in the European Union. However, the capacity for self-learning, the potential autonomy of AI systems and the multitude of actors involved presented a significant challenge to the effectiveness of European Union and national liability framework provisions. The European Parliament recognised that specific and coordinated adjustments to the liability regimes were necessary to compensate persons who suffered harm or property damage, but did not favour giving legal personality to AI systems. It stated that while physical or virtual activities, devices or processes that were driven by AI systems might technically be the direct or indirect cause of harm or damage, this was nearly always the result of someone building, deploying or interfering with the systems (European Parliament 2020). Parliament recognised, though, that the Product Liability Directive (PLD), while applicable to civil liability claims relating to defective AI systems, should be revised (along with an update of the Product Safety Directive) to adapt it to the digital world and address the challenges posed by emerging digital technologies. This would ensure a high level of effective consumer protection and legal certainty for consumers and businesses and minimise high costs and risks for small and medium-sized enterprises and start-ups. The European Commission is taking steps to revise sectoral product legislation (Ragonnaud 2022; Šajn 2022) and undertake initiatives that address liability issues related to new technologies, including AI systems.

A comparative law study on civil liability for artificial intelligence (Karner et al. 2021) questioned whether the liability regimes in European Union Member States provide for an adequate distribution of all risks, and whether victims will be indemnified or remain undercompensated if harmed by the operation of AI technology, even though tort law principles would favour remedying the harm. The study also highlighted that strict liability regimes are in place in all European jurisdictions, but that many AI systems would not fall under such risk-based regimes, leaving victims to pursue compensation via fault-based liability.

With particular respect to self-driving vehicles, existing legal liability frameworks are being reviewed and new measures have been or are being proposed (e.g. Automated and Electric Vehicles Act 2018; Dentons 2021). These will need to deal with issues that arise from the shift of control from humans to automated driver assistance systems, and to address conflicts of interest, responsibility gaps (who is responsible, and under what conditions: the human driver or passengers, the system operator, the insurer or the manufacturer) and the remedies applicable.

A mixture of approaches is required to address harms by AI, as different liability approaches serve different purposes: these could include fault- or negligence-based liability, strict liability and contractual liability. The strengthening of provisions for strict liability (liability that arises irrespective of fault or of a defect, malperformance or non-compliance with the law) is highly recommended for high-risk AI products and services (New Technologies Formation 2019), especially where such products and services may cause serious and/or significant and frequent harms, e.g. death, personal injury, financial loss or social unrest (Wendehorst 2020).

6.4.2 Quality Management for AI Systems

Given the risks shown in the case studies presented, it is critical that AI system providers have a good quality management system in place. As outlined in detail in the proposal for the Artificial Intelligence Act (European Commission 2021), this should cover the following aspects:

  1. a strategy for regulatory compliance …
  2. techniques, procedures and systematic actions to be used for the design, design control and design verification of the high-risk AI system;
  3. techniques, procedures and systematic actions to be used for the development, quality control and quality assurance of the high-risk AI system;
  4. examination, test and validation procedures to be carried out before, during and after the development of the high-risk AI system, and the frequency with which they have to be carried out;
  5. technical specifications, including standards, to be applied and, where the relevant harmonised standards are not applied in full, the means to be used to ensure that the high-risk AI system complies with the requirements set out [in this law];
  6. systems and procedures for data management, including data collection, data analysis, data labelling, data storage, data filtration, data mining, data aggregation, data retention and any other operation regarding the data that is performed before and for the purposes of the placing on the market or putting into service of high-risk AI systems;
  7. the risk management system …
  8. the setting-up, implementation and maintenance of a post-market monitoring system …
  9. procedures related to the reporting of serious incidents and of malfunctioning …
  10. the handling of communication with national competent authorities, competent authorities, including sectoral ones, providing or supporting the access to data, notified bodies, other operators, customers or other interested parties;
  11. systems and procedures for record keeping of all relevant documentation and information;
  12. resource management, including security of supply related measures;
  13. an accountability framework setting out the responsibilities of the management and other staff …

6.4.3 Adversarial Robustness

Case 3 demonstrates the need to make AI models more robust to adversarial attacks. As an IBM researcher puts it, “Adversarial robustness refers to a model’s ability to resist being fooled” (Chen 2021). This calls for the adoption of various measures to simulate and mitigate new attacks, such as reverse engineering to recover private data, adversarial training (Tramèr et al. 2018; Bai et al. 2021; University of Pittsburgh 2021), pre-generating adversarial images and teaching the model that these images have been manipulated, and designing robust models and algorithms (Dhawale et al. 2022). The onus is clearly on developers to prepare for and anticipate AI model vulnerabilities and threats.
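As a concrete illustration of the adversarial training idea mentioned above, the sketch below mixes clean and FGSM-perturbed examples in each optimisation step so that the model learns to resist small, malicious perturbations. It is a minimal sketch assuming a generic PyTorch classifier and data loader; the function name and hyperparameters are illustrative, not drawn from the cited works.

```python
# Minimal adversarial training sketch for a generic PyTorch classifier
# (illustrative only; model, loader and hyperparameters are placeholders).
import torch
import torch.nn.functional as F

def adversarial_training_epoch(model, loader, optimiser, epsilon=0.01):
    """Train for one epoch on a mix of clean and FGSM-perturbed inputs."""
    model.train()
    for inputs, labels in loader:
        # Craft adversarial versions of the current batch (FGSM).
        inputs_adv = inputs.clone().detach().requires_grad_(True)
        F.cross_entropy(model(inputs_adv), labels).backward()
        inputs_adv = (inputs_adv + epsilon * inputs_adv.grad.sign()).clamp(0, 1).detach()

        # Optimise on both clean and adversarial examples.
        optimiser.zero_grad()
        loss = 0.5 * F.cross_entropy(model(inputs), labels) \
             + 0.5 * F.cross_entropy(model(inputs_adv), labels)
        loss.backward()
        optimiser.step()
```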

Examples abound of efforts to increase adversarial robustness (Gorsline et al. 2021). Li et al. (2021) have proposed an enhanced defence technique called Attention and Adversarial Logit Pairing (AT + ALP) which, when applied to clean examples and their adversarial counterparts, helps improve accuracy on adversarial examples compared with standard adversarial training. Tian et al. (2021) have proposed what they call “detect and suppress the potential outliers” (DSPO), a defence against data poisoning attacks in federated learning scenarios.

6.5 Key Insights

The right to life is the baseline of all rights: the first among other human rights. It is closely related to other human rights, including some that are discussed elsewhere in this book, such as privacy (see Chap. 3) or dignity (see Chap. 7).

In the AI context, this right requires AI developers, deployers and users to respect the sanctity of human life and embed, value and respect this principle in the design, development and use of their products and/or services. Critically, AI systems should not be programmed to kill or injure humans.

Where there is a high likelihood of harms being caused, even if accidental, additional precautions must be taken and safeguards set up to avoid them, for example the use of standards, safety-based design, adequate monitoring of the AI system (Anderson 2020), training, and improved accident investigation and reporting (Alemzadeh et al. 2016).

While the technology may have exceeded human expectations, AI must support human life, not undermine it. The sanctity of human life must be preserved. What is furthermore required is sensitivity to the value of human life, liberty and security: it is insensitivity to harms and impacts that leads to problematic, change-resistant practices. Sensitivity requires the ability to understand what is needed and to take helpful action to fulfil that need. It also means remembering that AI can influence, change and damage human life in many ways. This sensitivity is required at all levels: development, deployment and use. It requires continuous learning about the adverse impacts that an AI system may have on human life, liberty and security, and avoiding and/or mitigating such impacts to the fullest extent possible.