Review article
Towards increased systems resilience: New challenges based on dissonance control for human reliability in Cyber-Physical&Human Systems
Introduction
Systems have to be designed not to prohibit individual unsafe acts but to prevent human errors or reduce their potential consequences by specifying adequate barriers or defenses (Reason, 2000). Human error can then be seen as the consequence of a failed defense rather than the cause of an unsafe event. However, more than 70% of accidents are still due to human errors, and 100% of them are directly or indirectly linked with human factors (Amalberti, 2013). Moreover, even if a technical system such as a Speed Control System (SCS) is designed to improve the safety or comfort of the car driver, its use can produce unsafe situations by reducing the inter-distance between cars, increasing reaction time or decreasing human vigilance (Dufour, 2014). This paper proposes some ways to analyze such a dilemma between the design and the use of a system. It extends this proposal by presenting new challenges for assessing and controlling human reliability in Cyber-Physical&Human Systems (CPHS), where one or several Cyber-Physical Systems (CPS) interact with human operators. It is an extension of the plenary talk entitled “Human reliability and Cyber-Physical&Human Systems” given by the author at the first IFAC conference on CPHS in Brazil. Based on the author's experience and on literature reviews, several challenges are discussed and motivate a new framework proposal for the study of human reliability in CPHS.
Human reliability usually confronts the problem of its definition and its assessment during the design, analysis or evaluation of CPHS such as human-machine systems, joint cognitive systems, systems of systems, socio-technical systems, multi-agent systems, manufacturing systems or cybernetic systems. Human reliability in CPHS may be defined by distinguishing two frames of reference, or baselines: (1) the frame related to what human operators are supposed to do, i.e. their prescriptions, and (2) the frame related to what they do outside these prescriptions. Human reliability can then be seen as the capacity of human operators to realize successfully both the tasks required by their prescriptions and the additional tasks, during an interval of time or at a given time. Human error is usually considered the negative view of human behavior: the failure of human operators to realize correctly their required tasks or the additional tasks. Methods for analyzing human reliability exist and are well explained and discussed in published state-of-the-art reviews (Bell and Holroyd, 2009, Hickling and Bowie, 2013, Kirwan, 1997a, Kirwan, 1997b, Pan et al., 2016, Reer, 2008, Straeter et al., 2012, Swain, 1990, Vanderhaegen, 2001, Vanderhaegen, 2010). They mainly consider the first set of tasks, i.e. they study the possible human errors related to what users are supposed to do. Human reliability assessment methods remain unsuitable or insufficient, and new developments are needed to take into account constraints such as the dynamic evolution of a system over time, the variability of a human operator or between human operators, and the creativity of human operators, who are capable of modifying the use of a system or inventing new uses.
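The two frames of reference can be made concrete with a small sketch. The function below is a hypothetical illustration, not a method from the paper: it scores human reliability as the fraction of tasks realized successfully across both the prescribed tasks and the additional, non-prescribed tasks.

```python
def human_reliability(prescribed: list, additional: list) -> float:
    """Fraction of tasks realized successfully over both frames of
    reference: prescribed tasks and additional (non-prescribed) tasks.
    Each entry is True (task realized correctly) or False (human error)."""
    outcomes = prescribed + additional
    if not outcomes:
        return 1.0  # no task observed, no error observed
    return sum(outcomes) / len(outcomes)

# Illustrative data: 4 successes out of 6 observed tasks.
print(human_reliability([True, True, False, True], [True, False]))  # 0.666...
```

An error-oriented view (the "negative" view in the text) would simply be `1 - human_reliability(...)` over the same observations.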
Regarding these new requirements for human reliability study, many contributions present the concept of resilience as an important issue for organization management and for controlling criteria such as safety, security, ethics, health or survival (Engle et al., 1996, Hale and Heijer, 2006, Hollnagel, 2006, Khaitan and McCalley, 2015, Orwin and Wardle, 2004, Pillay, 2016, Ruault et al., 2013, Seery, 2011, Wreathall, 2006). Resilience is usually linked with system stability: it is defined as the ability or the natural mechanisms of a CPHS to adjust its functioning after disturbances or aggressions in order to maintain a stable state, to return to a stable state, or to recover from an unstable state. The more stable a system, the less uncertain the human attitudes related to beliefs and intentions (Petrocelli, Clarkson, Tormala, & Hendrix, 2010). On the other hand, other studies present organizational stability as an obstacle to resilience, and instability as an advantage for survival (Holling, 1973, Holling, 1996, Lundberg and Johansson, 2006). A CPHS subject to regular significant variations that make it unstable may thus survive and remain resilient over a long period, whereas an isolated, stable CPHS that does not interact with others may fail to be resilient when an external aggression makes it unstable.
This paper proposes new challenges for the human reliability study of CPHS based on the above-mentioned concept of stability applied to human behaviors. The analysis of human stability is interpreted in terms of dissonances, and the successful control of these dissonances makes the CPHS resilient. The concept of dissonance is adapted from Festinger (1957) and Kervern (1994): a dissonance is a conflict of stability. Three main topics are then discussed in Sections 2–4 respectively: the dissonance oriented stability analysis, the dissonance identification, and the dissonance control. In parallel, a case study based on road transportation illustrates an application of these new ways to treat human reliability of a CPHS, taking into account the integration of different CPS into a car, i.e., a Speed Control System (SCS) and an Adaptive Cruise Control (ACC) that replaces the previous SCS.
Section snippets
Dissonance oriented stability analysis
Human stability relates to the equilibrium of human behaviors, i.e. human behaviors or their consequences remain relatively constant around a threshold value or an interval of values whatever the disturbances that occur. Out of this equilibrium, human behaviors or their consequences are unstable. The threshold value or the interval of values can be determined qualitatively or quantitatively by taking into account intrinsic and extrinsic factors such as technical factors, human factors,
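The notion of equilibrium described above can be sketched as a simple interval check. This is a minimal illustration under assumed names and values (the headway figures and the 1.5–2.5 s interval are not from the paper): a behavior indicator is considered stable while its observed values stay within the equilibrium interval despite disturbances.

```python
def is_stable(samples, low, high):
    """A behavior indicator is stable if all observed values remain
    within the equilibrium interval [low, high], whatever the
    disturbances that produced them."""
    return all(low <= s <= high for s in samples)

# Hypothetical example: a driver's headway times (seconds) while using
# a speed control system, checked against an illustrative 1.5-2.5 s range.
print(is_stable([1.9, 2.1, 2.0, 1.8], 1.5, 2.5))  # True  (stable)
print(is_stable([1.9, 2.1, 0.7, 1.8], 1.5, 2.5))  # False (unstable)
```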
Dissonance identification
The frames of reference support the interpretation of stable or unstable gaps in terms of dissonances. Serendipity is an example of a dissonance related to a conflict of goals when the frame of reference is wrong: it is a way of finding results fortuitously when the initial goal of the work was wrong. A lack of knowledge relates to another dissonance, one without any frame of reference. When human operators have to control unprecedented situations, they have to react rapidly in case
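One simple way to operationalize gaps against a frame of reference is a set comparison. The sketch below is a hypothetical illustration (the task names are invented): it contrasts the prescribed tasks with the observed activity, yielding omissions (prescribed but not performed) and additional acts (performed outside the prescriptions).

```python
def identify_dissonances(prescribed: set, performed: set) -> dict:
    """Gaps between the frame of reference (prescriptions) and the
    observed activity: omissions and additional, non-prescribed acts."""
    return {
        "omissions": prescribed - performed,
        "additional": performed - prescribed,
    }

# Hypothetical driving tasks for a car equipped with an ACC.
gaps = identify_dissonances({"check_mirrors", "set_acc"},
                            {"set_acc", "override_acc"})
print(gaps)
```

Whether such a gap is a dissonance to reject or a useful new practice to accept is precisely the object of the control process discussed in the next section.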
Dissonance control
The dissonance control process consists of assessing the impact of dissonances, rejecting or accepting them, and reinforcing the frames of reference accordingly. The reinforcement of the CPHS frame of reference can use several mechanisms:
- The rejection mode, without any modification of the frame of reference. When the control of a dissonance is considered difficult, irrelevant, useless or unacceptable, it is easier to refuse any revision of the frames of reference.
- The
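The control process described above can be sketched as a single decision step. This is a minimal, hypothetical illustration (the numeric impact scale, threshold, and task names are assumptions, not the paper's formalism): the dissonance's impact is assessed, and the dissonance is either rejected, leaving the frame of reference unchanged, or accepted, reinforcing the frame.

```python
def control_dissonance(frame: set, dissonance: str,
                       impact: float, threshold: float = 0.0) -> set:
    """One dissonance control step: assess impact, then reject
    (frame unchanged) or accept (frame of reference reinforced)."""
    if impact <= threshold:
        return set(frame)          # rejection mode: no revision of the frame
    return frame | {dissonance}    # acceptance: reinforce the frame

# Hypothetical frame of reference for the ACC case study.
frame = {"keep_safe_headway"}
print(control_dissonance(frame, "anticipate_acc_limits", impact=0.8))
```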
Conclusion
This paper has detailed new challenges for the study of human reliability in CPHS. It is based on a new framework for human reliability grounded in stability and dissonance. Resilience is defined as the capacity of a system to control stability and instability successfully. Three main topics are then discussed in order to make a CPHS resilient by integrating the positive and negative contributions of human operators: the dissonance oriented stability analysis, the dissonance identification, and the
Acknowledgements
The present research work is supported by the International Research Network on Human-Machine Systems in Transportation and Industry (GDR I HAMASYTI). The author gratefully acknowledges the support of this network.
References (77)
- et al. Knowledge discovery using genetic algorithm for maritime situational awareness. Expert Systems with Applications (2014).
- et al. Cognitive conflict in human-automation interactions: A psychophysiological study. Applied Ergonomics (2012).
- et al. A hybrid reinforced learning system to estimate resilience indicators. Engineering Applications of Artificial Intelligence (2017).
- et al. Child development: Vulnerability and resilience. Social Science & Medicine (1996).
- Smart collaboration between humans and machines based on mutual understanding. Annual Reviews in Control (2008).
- et al. Quantification of performance shaping factors (PSFs)' weightings for human reliability analysis (HRA) of low power and shutdown (LPSD) operations. Annals of Nuclear Energy (2017).
- Validation of human reliability assessment techniques: Part 1 – Validation issues. Safety Science (1997).
- Validation of human reliability assessment techniques: Part 2 – Validation results. Safety Science (1997).
- et al. Eco-driving command for tram-driver system. IFAC-PapersOnLine (2016).
- et al. Why anamorphoses look as they do: An experimental study. Acta Psychologica (1991).
- A common work space for a mutual enrichment of human-machine cooperation and team-situation awareness.
- New indices for quantifying the resistance and resilience of soil biota to exogenous disturbances. Soil Biology & Biochemistry.
- Perceiving stability as a means to attitude certainty: The role of implicit theories of attitudes. Journal of Experimental Social Psychology.
- Modelling border-line tolerated conditions of use (BTCUs) and associated risks. Safety Science.
- Iterative learning control based tools to learn from human error. Engineering Applications of Artificial Intelligence.
- Review of advances in human reliability analysis of errors of commission – Part 2: EOC quantification.
- Human stability: Toward multi-level control of human behavior.
- Sociotechnical systems resilience: A dissonance engineering point of view. IFAC-PapersOnLine.
- Using model checking to help discover mode confusions and other automation surprises. Reliability Engineering & System Safety.
- Using the BCD model for risk analysis: An influence diagram based approach. Engineering Applications of Artificial Intelligence.
- Challenge or threat? Cardiovascular indexes of resilience and vulnerability to potential stress in humans. Neuroscience and Biobehavioral Reviews.
- The theory of cognitive dissonance: A marketing and management perspective. Procedia Social and Behavioral Sciences.
- Multilevel organization design: The case of the air traffic control. Control Engineering Practice.
- Toward a model of unreliability to study error prevention supports. Interacting with Computers.
- A non-probabilistic prospective and retrospective human reliability analysis method – Application to railway system. Reliability Engineering & System Safety.
- Toward a Petri net based model to control conflicts of autonomy between Cyber-Physical&Human Systems. IFAC-PapersOnLine.
- A rule-based support system for dissonance discovery and control applied to car driving. Expert Systems with Applications.
- Mirror effect based learning systems to predict human errors – Application to the air traffic control. IFAC-PapersOnLine.
- Efficiency of safety barriers facing human errors.
- A multi-viewpoint system to support abductive reasoning. Information Sciences.
- Reinforced learning systems based on merged and cumulative knowledge to predict human actions. Information Sciences.
- A reinforced iterative formalism to learn from human errors and uncertainty. Engineering Applications of Artificial Intelligence.
- A Benefit/Cost/Deficit (BCD) model for learning from human errors. Reliability Engineering & System Safety.
- Using adjustable autonomy and human-machine cooperation for the resilience of a human-machine system – Application to a ground robotic system. Information Sciences.
- Application and assessment of cognitive dissonance theory in the learning process. Journal of Universal Computer Science.
- Human error at the centre of the debate on safety.
- Review of human reliability assessment methods.
- A2PG: Alternative action plan generator. Cognition, Technology & Work.