Review article
Towards increased systems resilience: New challenges based on dissonance control for human reliability in Cyber-Physical&Human Systems

https://doi.org/10.1016/j.arcontrol.2017.09.008

Abstract

This paper discusses concepts and tools for the joint analysis and control of humans and cyber-physical systems, with a view to increasing the resilience of the whole system. More precisely, it details new challenges for human reliability based on the dissonance control of Cyber-Physical&Human Systems (CPHS) to improve system resilience. The proposed framework covers three main topics: stability analysis in terms of dissonances, dissonance identification, and dissonance control. Dissonance oriented stability analysis consists in determining any conflicting situations that result from human behaviors interacting with Cyber-Physical Systems (CPS). Frames of reference support the assessment of stable or unstable gaps among stability shaping factors and the identification of dissonances. Dissonance control consists in reinforcing the frames of reference by applying reinforcement modes. It then aims at accepting or rejecting the identified dissonances by using supports such as expert judgment, feedback of experience, simulation, learning or cooperation. An example from road transportation illustrates the interest of the proposed framework by studying possible dissonances between car drivers and CPS. As automation spreads through society and generates ever closer interactions with humans, the ideas of this paper will support the design of new analysis and control tools, built jointly by researchers from the social and control sciences, to study the resilience of the whole CPHS in terms of dissonances.

Introduction

Systems have to be designed not to prohibit individual unsafe acts but to prevent the occurrence of human errors or to reduce their potential consequences by specifying adequate barriers or defenses (Reason, 2000). Human error can then be seen as the consequence of a failed defense rather than the cause of an unsafe event. Nevertheless, more than 70% of accidents are still due to human errors, and 100% of them are directly or indirectly linked with human factors (Amalberti, 2013). Moreover, even if a technical system such as a Speed Control System (SCS) is designed to improve the safety or the comfort of the car driver, its use can produce unsafe situations through a reduction of the inter-distance between cars, an increase in reaction time, or a decrease in driver vigilance (Dufour, 2014). This paper proposes ways to analyze such a dilemma between the design and the use of a system, and extends this proposal by presenting new challenges for assessing and controlling the human reliability of Cyber-Physical&Human Systems (CPHS), in which one or several Cyber-Physical Systems (CPS) interact with human operators. It is an extension of the plenary talk entitled “Human reliability and Cyber-Physical&Human Systems” given by the author at the first IFAC conference on CPHS in Brazil. Based on the author's experience and on literature reviews, several challenges are discussed and motivate a new framework proposal for the study of human reliability in CPHS.

Human reliability usually confronts the problem of its definition and its assessment during the design, analysis or evaluation of CPHS such as human-machine systems, joint cognitive systems, systems of systems, socio-technical systems, multi-agent systems, manufacturing systems or cybernetic systems. The human reliability of a CPHS may be defined by distinguishing two frames of reference, or baselines: (1) the frame related to what the human operators are supposed to do, i.e., their prescriptions; (2) the frame related to what they do outside these prescriptions. Human reliability can then be seen as the capacity of human operators to realize successfully the tasks required by their prescriptions, as well as additional tasks, during an interval of time or at a given time. Human error is usually considered as the negative view of human behavior: the capacity of human operators not to realize correctly their required tasks or the additional ones. Methods for analyzing human reliability exist and are well explained and discussed in published state-of-the-art reviews (Bell & Holroyd, 2009; Hickling & Bowie, 2013; Kirwan, 1997a, 1997b; Pan et al., 2016; Reer, 2008; Straeter et al., 2012; Swain, 1990; Vanderhaegen, 2001, 2010). They mainly consider the first set of tasks, i.e., they study the possible human errors related to what the users are supposed to do. Human reliability assessment methods thus remain unsuitable or insufficient, and new developments are needed to take into account new constraints such as the dynamic evolution of a system over time, the variability of a human operator or between human operators, or the creativity of human operators, who are capable of modifying the use of a system or of inventing new uses.

Regarding such new requirements for the study of human reliability, many contributions present the concept of resilience as an important issue for the management of organizations and for controlling criteria such as safety, security, ethics, health or survival (Engle et al., 1996; Hale & Heijer, 2006; Hollnagel, 2006; Khaitan & McCalley, 2015; Orwin & Wardle, 2004; Pillay, 2016; Ruault et al., 2013; Seery, 2011; Wreathall, 2006). Resilience is usually linked with system stability and is defined as the ability or the natural mechanisms of a CPHS to adjust its functioning after disturbances or aggressions in order to maintain its stable state, to come back to a stable state, or to recover from an unstable state. The more stable a system, the less uncertain the human attitudes related to beliefs and intentions (Petrocelli, Clarkson, Tormala, & Hendrix, 2010). On the other hand, other studies present organizational stability as an obstacle to being resilient, and instability as an advantage for survival (Holling, 1973, 1996; Lundberg & Johansson, 2006). A CPHS subject to regular, important variations that provoke instability may thus survive and remain resilient over a long period of time, whereas an isolated, stable CPHS that does not interact with others may not be resilient when an external aggression occurs and makes it unstable.
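To make these two frames of reference concrete, the following minimal Python sketch encodes the prescribed tasks and the tasks actually performed as sets, and derives errors, additional tasks and a simple success ratio as a reliability measure. The task names and the set-based encoding are illustrative assumptions, not the paper's formalism.

    # Minimal sketch of the two frames of reference described above.
    # Task names and the set-based encoding are hypothetical.

    prescribed = {"monitor_speed", "keep_distance", "signal_lane_change"}

    # Tasks the driver actually performed correctly during an interval.
    performed_ok = {"monitor_speed", "keep_distance", "adjust_mirrors"}

    # Frame 1: behavior inside the prescriptions.
    inside = performed_ok & prescribed    # prescribed tasks done correctly
    errors = prescribed - performed_ok    # prescribed tasks not done correctly

    # Frame 2: behavior outside the prescriptions (additional tasks).
    additional = performed_ok - prescribed

    # A simple reliability reading: the capacity to realize the required
    # tasks successfully, here reduced to a success ratio.
    reliability = len(inside) / len(prescribed)

    print(f"reliability={reliability:.2f}, errors={errors}, additional={additional}")

Such a set-based reading only quantifies the first frame; the additional tasks illustrate why methods restricted to the prescriptions miss part of the picture.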

This paper proposes new challenges for the study of the human reliability of CPHS based on the above-mentioned concept of stability applied to human behaviors. The analysis of human stability is interpreted in terms of dissonances, and the successful control of these dissonances makes the CPHS resilient. The concept of dissonance is adapted from Festinger (1957) and Kervern (1994): a dissonance is a conflict of stability. Three main topics are then discussed in Sections 2–4 respectively: dissonance oriented stability analysis, dissonance identification, and dissonance control. In parallel, a case study in road transportation illustrates the application of these new ways of treating the human reliability of a CPHS by considering the integration of different CPS into a car, i.e., a Speed Control System (SCS) and an Adaptive Cruise Control (ACC) that replaces this SCS.


Dissonance oriented stability analysis

Human stability relates to the equilibrium of human behaviors, i.e., human behaviors or their consequences remain relatively constant around a threshold value or within an interval of values, whatever the disturbances that occur. Out of this equilibrium, human behaviors or their consequences are unstable. The threshold value or the interval of values can be determined qualitatively or quantitatively by taking into account intrinsic and extrinsic factors such as technical factors, human factors or organizational factors.
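As a purely illustrative reading of this definition, the following Python sketch checks whether a sampled behavior stays within an interval around a threshold value; the signal, target and tolerance are hypothetical numbers, not data from the paper.

    # Minimal sketch of the interval-based stability check described above.
    # Signal, target and tolerance values are hypothetical.

    def is_stable(samples, target, tolerance):
        """Return True when every sample stays within target +/- tolerance."""
        return all(abs(s - target) <= tolerance for s in samples)

    # Example: driver headway (seconds) to the car ahead, sampled over time.
    headway = [2.1, 2.0, 1.9, 2.2, 1.4]  # the last sample leaves the interval

    print(is_stable(headway, target=2.0, tolerance=0.3))  # False: unstable gap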

Dissonance identification

The frames of reference support the interpretation of stable or unstable gaps in terms of dissonances. Serendipity is an example of a dissonance related to a conflict of goals when the frame of reference is wrong: results are found fortuitously because the initial goal of the work was mistaken. A lack of knowledge relates to another dissonance, one without any frame of reference. When human operators have to control unprecedented situations, they have to react rapidly in case
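A minimal sketch of how gaps between a frame of reference and observed behavior could be flagged as dissonances is given below; the categories and the rule are illustrative assumptions, not the paper's taxonomy.

    # Minimal sketch: interpreting gaps against a frame of reference.
    # Categories and the rule are illustrative, not the paper's taxonomy.

    def classify_gap(expected, observed):
        """Classify the gap between what a frame of reference expects
        and what is actually observed."""
        if expected is None:
            return "no frame of reference (lack of knowledge)"
        if observed == expected:
            return "no dissonance"
        return "dissonance: behavior conflicts with the frame of reference"

    # ACC frame: the driver is expected to keep supervising the road.
    print(classify_gap("supervise", "supervise"))   # no dissonance
    print(classify_gap("supervise", "distracted"))  # dissonance
    print(classify_gap(None, "improvise"))          # unprecedented situation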

Dissonance control

The dissonance control process consists in assessing the impact of dissonances, in rejecting or accepting them, and in reinforcing the frames of reference accordingly. The reinforcement of the CPHS frame of reference can use several mechanisms (a minimal sketch of the resulting control loop follows the list):

  • The rejection mode, without any modification of the frame of reference. When the control of a dissonance is considered difficult, irrelevant, useless or unacceptable, it is easier to refuse any revision of the frames of reference.

  • The
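The sketch below illustrates one possible reading of this loop in Python: the impact of a dissonance is scored, the dissonance is rejected or accepted, and the frame of reference is reinforced on acceptance. The benefit/cost scoring is loosely inspired by the Benefit/Cost/Deficit (BCD) model cited in the references; all names and numbers are hypothetical.

    # Minimal sketch of the dissonance control loop described above.
    # The benefit/cost scoring loosely follows the BCD idea cited below;
    # names, weights and thresholds are hypothetical.

    def control_dissonance(frame, dissonance, benefit, cost):
        if benefit <= cost:
            # Rejection mode: the frame of reference is kept unchanged.
            return frame
        # Acceptance: the frame is reinforced with the new behavior,
        # e.g. after expert judgment, feedback of experience or learning.
        return frame | {dissonance}

    frame = {"keep_distance", "supervise"}
    print(control_dissonance(frame, "early_braking", benefit=3, cost=1))
    print(control_dissonance(frame, "hands_off_wheel", benefit=0, cost=5))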

Conclusion

This paper has detailed new challenges for the study of human reliability in CPHS. It is based on a new framework for human reliability grounded in stability and dissonance. Resilience is defined as the capacity of a system to control stability and instability successfully. Three main topics are then discussed in order to make a CPHS resilient by integrating the positive and negative contributions of human operators: dissonance oriented stability analysis, dissonance identification, and dissonance control.

Acknowledgements

The present research work is supported by the International Research Network on Human-Machine Systems in Transportation and Industry (GDR I HAMASYTI). The author gratefully acknowledges the support of this network.

References

  • P. Millot et al., A common work space for a mutual enrichment of human-machine cooperation and team-situation awareness.

  • K.H. Orwin et al., New indices for quantifying the resistance and resilience of soil biota to exogenous disturbances, Soil Biology & Biochemistry (2004).

  • J.V. Petrocelli et al., Perceiving stability as a means to attitude certainty: The role of implicit theories of attitudes, Journal of Experimental Social Psychology (2010).

  • P. Polet et al., Modelling border-line tolerated conditions of use (BTCUs) and associated risks, Safety Science (2003).

  • P. Polet et al., Iterative learning control based tools to learn from human error, Engineering Applications of Artificial Intelligence (2012).

  • B. Reer, Review of advances in human reliability analysis of errors of commission – Part 2: EOC quantification (2008).

  • P. Richard et al., Human stability: Toward multi-level control of human behavior.

  • J. Ruault et al., Sociotechnical systems resilience: A dissonance engineering point of view, IFAC-PapersOnLine (2013).

  • J. Rushby, Using model checking to help discover mode confusions and other automation surprises, Reliability Engineering & System Safety (2002).

  • K. Sedki et al., Using the BCD model for risk analysis: An influence diagram based approach, Engineering Applications of Artificial Intelligence (2013).

  • M.D. Seery, Challenge or threat? Cardiovascular indexes of resilience and vulnerability to potential stress in humans, Neuroscience and Biobehavioral Reviews (2011).

  • E.E. Telci et al., The theory of cognitive dissonance: A marketing and management perspective, Procedia Social and Behavioral Sciences (2011).

  • F. Vanderhaegen, Multilevel organization design: The case of the air traffic control, Control Engineering Practice (1997).

  • F. Vanderhaegen, Toward a model of unreliability to study error prevention supports, Interacting With Computers (1999).

  • F. Vanderhaegen, A non-probabilistic prospective and retrospective human reliability analysis method – Application to railway system, Reliability Engineering & System Safety (2001).

  • F. Vanderhaegen, Toward a Petri net based model to control conflicts of autonomy between Cyber-Physical&Human Systems, IFAC-PapersOnLine (2016).

  • F. Vanderhaegen, A rule-based support system for dissonance discovery and control applied to car driving, Expert Systems With Applications (2016).

  • F. Vanderhaegen, Mirror effect based learning systems to predict human errors – Application to the Air Traffic Control, IFAC-PapersOnLine (2016).

  • F. Vanderhaegen et al., Efficiency of safety barriers facing human errors.

  • F. Vanderhaegen et al., A multi-viewpoint system to support abductive reasoning, Information Sciences (2011).

  • F. Vanderhaegen et al., Reinforced learning systems based on merged and cumulative knowledge to predict human actions, Information Sciences (2014).

  • F. Vanderhaegen et al., A reinforced iterative formalism to learn from human errors and uncertainty, Engineering Applications of Artificial Intelligence (2009).

  • F. Vanderhaegen et al., A Benefit/Cost/Deficit (BCD) model for learning from human errors, Reliability Engineering & System Safety (2011).

  • S. Zieba et al., Using adjustable autonomy and human-machine cooperation for the resilience of a human-machine system – Application to a ground robotic system, Information Sciences (2011).

  • E. Aïmeur, Application and assessment of cognitive dissonance theory in the learning process, Journal of Universal Computer Science (1998).

  • R. Amalberti, Human error at the centre of the debate on safety.

  • J. Bell et al., Review of human reliability assessment methods (2009).

  • W. Ben Yahia et al., A2PG: alternative action plan generator, Cognition, Technology & Work (2015).