A knowledge-driven approach for process supervision in chemical plants
Introduction
Process supervision and fault diagnosis deal with the detection and isolation of abnormal events: interpreting the current state of the plant from sensor readings and process knowledge. Fault diagnosis is of crucial importance in terms of both safety and economics, because abnormal events affect the yield and quality of products.
Usually, process operators, supported by control alarms (i.e. simple univariate monitoring alarms), are the ones who manage abnormal situations. The complexity of modern processes makes abnormal events difficult to predict, since many different things can go wrong in many different ways. Moreover, control alarms give no information about the root causes of faults; they only report particular deviations. Consequently, in recent years, intelligent real-time supervision and fault diagnosis systems have emerged as necessary tools to help operators deal with abnormal situations. Such a system has to work in real time, detecting abnormal events as soon as possible, informing the operator about the deviations and their root causes, and suggesting solutions. Appropriate actions can then be taken in real time to avoid fault propagation and to reduce the possible consequences of abnormal events.
Many process supervision methods have been reported in the literature. They can be roughly classified into three groups: quantitative model-based methods, qualitative model-based methods and process-history-based methods. Quantitative model-based methods use quantitative techniques such as bond graph models (Ould-Bouamama, El Harabi, Abdelkrim, & Ben Gayed, 2012), observers (Patton & Chen, 1997) and Kalman filters (Bhagwat et al., 2003; Villez et al., 2011), all of them based on deep process knowledge. Qualitative model-based diagnosis has been proposed by Venkatasubramanian and Rich (1988) and Ram Maurya, Rengaswamy, and Venkatasubramanian (2004) using digraph causal models. History-based methods were presented by Sundarraman and Srinivasan (2003), Maurya, Paritosh, Rengaswamy, and Venkatasubramanian (2010), and Villez, Rosén, Anctil, Duchesne, and Vanrolleghem (2013) using trend analysis, and by Moore and Kramer (1986), Muthuswamy and Srinivasan (2003), and Qian, Li, Jiang, and Wen (2003) using expert systems for multivariate inference. Statistical methods (MacGregor & Cinar, 2012; Yu, 2012; Zhang et al., 2011) and neural networks (Srinivasan, Wang, Ho, & Lim, 2005) are examples of quantitative history-based techniques. Additionally, several methods have been combined in order to find the proper synergies, amplifying strengths while avoiding the shortcomings intrinsic to each methodology, such as the lack of transparency, extensibility, novel fault detection, system adaptation and robustness (Ghosh et al., 2011; Musulin et al., 2006; Wang et al., 2012). It is not intended to give a complete bibliography of this vast topic; for a more thorough review, the reader is referred to the excellent articles of Venkatasubramanian, Rengaswamy, and Kavuri (2003), Venkatasubramanian, Rengaswamy, Kavuri, and Yin (2003), Venkatasubramanian, Rengaswamy, Yin, and Kavuri (2003), and Maurya, Rengaswamy, and Venkatasubramanian (2007).
Fault detection and diagnosis cannot improve safety and reliability by themselves. A fault evaluation must be performed to classify faults into different hazard classes. Hazards are undesirable system conditions with the potential to cause or contribute to damage or accidents (Leveson, 1995). Since no action can be taken to avoid or reduce the effects of unidentified hazards, a process hazard analysis must be performed prior to the fault diagnosis task. HAZOP (Gillett, 1997) is the most widespread inductive technique for identifying hazards and evaluating possible scenarios leading to unwanted consequences. Since HAZOP is a time-consuming and labor-intensive activity, many researchers have attempted to develop expert systems to automate HAZOP analysis.
One of the first attempts to automate HAZOP was the pioneering work of Parmar and Lees (1987a, 1987b), who presented a rule-based algorithm to model and propagate faults for hazard identification. Later on, in a series of works (Vaidhyanathan & Venkatasubramanian, 1995; Venkatasubramanian & Vaidhyanathan, 1994), Venkatasubramanian and his colleagues developed and evolved a model-based framework and an expert system called HAZOPExpert using the Gensym G2 system. In Vaidhyanathan and Venkatasubramanian (1996) the authors proposed a semi-quantitative reasoning methodology to rank and filter HAZOPExpert results deemed minor or unrealistic by human experts. Zhao et al. (2005a, 2005b) introduced a high-quality, full-scale software system (PHASuite) for automated HAZOP analysis. PHASuite was based on a knowledge engineering framework, including a reasoning engine based on Petri nets. Finally, Zhao, Cui, Zhao, Qiu, and Chen (2009) integrated six existing ontologies into a case-based reasoning system that automatically generates HAZOP analyses. Intelligent HAZOP analysis approaches have been extensively reviewed in Venkatasubramanian, Zhao, and Viswanathan (2000), Zhao et al. (2005a), and Dunjó, Fthenakis, Vílchez, and Arnaldos (2010).
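The inductive core that these expert systems automate can be sketched in a few lines of Python. The guidewords, variables and names below are purely illustrative, not taken from any of the cited tools; the point is only that HAZOP enumerates candidate deviations by systematically combining guidewords with process variables:

```python
from itertools import product

# Illustrative subset of HAZOP guidewords; a real study uses the full set
# ("no", "more", "less", "reverse", "as well as", "part of", "other than").
GUIDEWORDS = ["no", "more", "less", "reverse"]


def enumerate_deviations(variables):
    """Return every guideword/variable combination as a candidate deviation.

    Each resulting deviation (e.g. "more pressure") must then be assessed
    for plausible causes and consequences, which is where the real effort
    of a HAZOP study, manual or automated, lies.
    """
    return [f"{gw} {var}" for gw, var in product(GUIDEWORDS, variables)]


deviations = enumerate_deviations(["flow", "pressure", "temperature"])
print(len(deviations))  # 4 guidewords x 3 variables = 12 candidate deviations
```

The combinatorial growth visible here is precisely why HAZOP is labor-intensive and why automation has attracted so much research effort.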
One important limitation to the generalized adoption of advanced process supervision tools is the need for highly customized solutions. For each process, new models have to be developed and a huge amount of data must be analyzed, a task that is intensive in terms of time, effort and money. In order to develop more general and transparent solutions, interest in reusable knowledge models has arisen.
Knowledge-based supervision methods (Moore & Kramer, 1986; Musulin et al., 2006; Muthuswamy & Srinivasan, 2003; Qian et al., 2003; Ruiz et al., 2001) usually employ a large number of rules to detect deviations and to classify them into the corresponding fault classes. These methods typically adopt a rule-based knowledge representation scheme or a programming-logic framework. In plant-wide implementations, these rules are distributed all over the information system with little or no logical linkage between them. Hence, any modification of the rule bases can generate inconsistencies, since its semantic impact on the rest of the system remains hidden.
On account of these drawbacks, ontologies have attracted attention in the Process Systems Engineering (PSE) field as a convenient means for knowledge representation (Gernaey & Gani, 2010; Morbach et al., 2009; Natarajan et al., 2012; Zhao et al., 2012). The term ontology denotes a conceptual data schema that represents the relevant domain entities and their relations. The widely accepted definition of ontology is “a formal, explicit specification of a shared conceptualization” (Gruber, 1993). Ontologies build on existing information sources, incorporating semantics and thus making the stored information richer and more meaningful. One of the most important potentialities derived from the implementation of a high-quality formal ontology is reasoning: once the information has been properly described, a software reasoner can check its consistency and infer new facts.
Nevertheless, building a high-quality ontology is not an easy task, as several design considerations must be taken into account. In the PSE context, a knowledge model should support the requirements of integrated software applications while correctly capturing the domain semantics. The latter is critical to successfully exploiting domain knowledge and also poses the greatest difficulties, as concepts involve very complex constructions such as ambiguity, synonymy, homonymy and vagueness. Moreover, the meaning of concepts is highly dependent on context. When these difficulties are ignored, the result is pseudo-ontologies (Morbach et al., 2009): formal representation schemes that do not consider the principles of integration and reuse of the knowledge base (KB). However, the most important differentiating factor is not the reusability of the respective structure, but its semantic richness. From this point of view, ontologies can be classified into two types (Corcho, Fernández-López, & Gómez-Pérez, 2006). Lightweight ontologies are mainly concept taxonomies that include relationships between concepts but no axiomatic definitions to formally define the semantics of their terms; due to their simple internal design they are not considered full-fledged ontologies. Heavyweight ontologies, on the other hand, model the domain semantics in a deeper way by adding restrictions (i.e. axioms and constraints) that clarify the intended meaning of the terms involved in the ontology.
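The lightweight/heavyweight distinction can be illustrated with a minimal Python sketch. All class names and constraints here are hypothetical and far simpler than any real ontology language; the intent is only to show that a bare taxonomy supports subsumption queries, while added axioms additionally support consistency checking:

```python
# A lightweight ontology is essentially an is-a taxonomy; the only
# inference it supports is transitive subsumption (hypothetical classes).
taxonomy = {"CentrifugalPump": "Pump", "Pump": "Equipment", "Equipment": "Thing"}


def is_a(cls, ancestor):
    """Walk the taxonomy upwards to answer a subsumption query."""
    while cls is not None:
        if cls == ancestor:
            return True
        cls = taxonomy.get(cls)
    return False


# A heavyweight ontology additionally attaches axioms that pin down the
# intended meaning of a term, e.g. "every Pump has exactly one outlet"
# (an invented constraint, encoded here as a plain predicate).
constraints = {"Pump": lambda ind: len(ind["outlets"]) == 1}


def consistent(cls, individual):
    """Check an individual against the axioms of cls and all its ancestors."""
    return all(
        check(individual) for c, check in constraints.items() if is_a(cls, c)
    )


print(is_a("CentrifugalPump", "Equipment"))                # True
print(consistent("CentrifugalPump", {"outlets": ["s1"]}))  # True
```

In a real heavyweight ontology these axioms would be stated in a formal language such as OWL DL and checked by a dedicated reasoner, rather than by hand-written predicates.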
Consequently, the literature presents two distinct approaches to employing ontologies in model design (Villa, Athanasiadis, & Rizzoli, 2009). In the semantic mediation approach, concepts from ontologies supplement conventional data and models to facilitate information integration and reuse. In the knowledge-driven approach, datasets and models are directly represented as instances of ontologies and embody a statement of the system conceptualization, enabling machine reasoning about the system structure that can lead to more sophisticated applications.
Nevertheless, in the PSE domain, most of the proposed ontology-based frameworks involve pseudo-ontologies or lightweight ontologies, sometimes as a mere technological update of previous works. Moreover, many of these knowledge models (Vaidhyanathan & Venkatasubramanian, 1995; Venkatasubramanian & Vaidhyanathan, 1994; Zhao et al., 2005a, 2005b) were designed following a semantic mediation approach or were not formally implemented.
One of the most remarkable works in the PSE domain is OntoCAPE, a large heavyweight ontology reported by Marquardt, Morbach, Wiesner, and Yang (2010). OntoCAPE has been developed in a multi-layer architecture with conceptualizations of mathematical models, chemical processes, equipment configurations and control systems. It is used for the annotation of electronic documents and data stores in order to obtain a consistent representation of these heterogeneous information sources. Batres and Naka (2000) and Batres, Aoyama, and Naka (2002) have presented a multi-dimensional formalism (MDF) that contains four ontologies for modeling the structure, materials and behavior of chemical plants. Due to an early collaboration between the two research teams, MDF and OntoCAPE share some representation schemes.
Some authors have extended OntoCAPE in order to support several applications. Wiesner, Morbach, and Marquardt (2011) present a prototypical ontology-based software tool for the integration and consolidation of distributed design data in chemical process engineering. The authors report efficiency shortcomings in the reasoning process, so their implementation requires powerful CPU clusters to run acceptably fast. Recently, Natarajan et al. (2012) presented an extension of OntoCAPE, called OntoSafe, with the information necessary to evaluate process states and conditions against potential faults. OntoSafe adds to OntoCAPE a partial model with a few new concepts to support a multi-agent system (ENCORE). However, the reasoning capabilities remain in ENCORE, which is responsible for the supervision tasks.
In this work, a knowledge-driven approach to on-line process supervision and diagnosis is presented. Supervision tasks are performed by reasoning on ontology-based knowledge models. The proposed KBS captures process events and identifies possible deviations. Additionally, a HAZOP conceptualization has been developed to support hazard management. A prototype has been implemented using semantic web technologies following the W3C standards. By formalizing a heavyweight ontology in Description Logic (Krötzsch, Simancik, & Horrocks, 2012), it is also intended to replace the use of Horn-like rules with the defining axioms of each supervision class, and to use a semantic reasoner to automatically detect and classify process faults.
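The idea of replacing scattered Horn-like rules with class axioms can be sketched as follows. The classes, properties and axioms below are invented for illustration and are not the paper's actual ontology; in the real implementation, classification would be delegated to an OWL DL reasoner rather than to hand-written predicates:

```python
# Illustrative DL-style definitions (notation is informal):
#   HighPressureDeviation == Deviation AND hasVariable.Pressure
#                                      AND hasGuideword.High
# A reasoner classifies each event individual under every defined class
# whose axiom it satisfies; here that "realization" step is emulated.

class Event:
    """A process event described by the deviating variable and guideword."""

    def __init__(self, variable, guideword):
        self.variable = variable
        self.guideword = guideword


# Each supervision class is defined by one axiom, kept in a single place,
# instead of being spread across many loosely linked if-then rules.
DEFINED_CLASSES = {
    "HighPressureDeviation": lambda e: e.variable == "pressure"
    and e.guideword == "high",
    "LowFlowDeviation": lambda e: e.variable == "flow"
    and e.guideword == "low",
}


def classify(event):
    """Return every defined class the event individual belongs to."""
    return sorted(c for c, axiom in DEFINED_CLASSES.items() if axiom(event))


print(classify(Event("pressure", "high")))  # ['HighPressureDeviation']
```

Because each class definition lives in one axiom, editing it cannot silently contradict rules elsewhere, which is precisely the maintainability problem of distributed rule bases noted above.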
The paper is organized as follows. The Ontology and its constitutive modules are explained in Section 2. Knowledge exploitation features are illustrated in Section 3 on the Tennessee Eastman process. Finally, some concluding remarks are given in Section 4.
The knowledge model
In this paper, it is argued that the domain ontology should be developed not only in pursuit of a general and reusable knowledge representation of the domain concepts: it must also be prepared to support the process supervision tasks in a specific plant with minimal implementation/adaptation effort. These goals involve two ontological design principles (Corcho et al., 2006): usability (i.e. usefulness for a specific task) and reusability (i.e. adaptability to different application
Knowledge exploitation on the Tennessee Eastman process
The proposed approach has been applied to the Tennessee Eastman (TE) process (Downs & Vogel, 1993). This benchmark provides a rich configuration of equipment and control loops that enables the validation of the main functionality of the proposed KBS.
The TE process produces two liquid products, G and H, from four gaseous reactants, A, C, D and E. Additionally, two side reactions occur and B is an inert component. All reactions are irreversible and exothermic. The process has five main units: an
Concluding remarks
In this work, an ontology-based framework for process supervision in chemical plants has been presented.
A conceptualization of equipment, control systems and hazards has been developed. It includes the semantics of each term in order to obtain a heavyweight ontology, which has been formalized using Description Logic. A knowledge-driven approach has been adopted in order to demonstrate how DL reasoning can be used to support process supervision without the help of external agents.
In the
Acknowledgement
Financial support from CONICET, ANPCyT and FCEIA-UNR is gratefully acknowledged by the authors.
References (57)
- A life-cycle approach for model reuse and exchange. Computers and Chemical Engineering (2002).
- Fault detection during process transitions: A model-based approach. Chemical Engineering Science (2003).
- A plant-wide industrial process control problem. Computers and Chemical Engineering (1993).
- Hazard and operability (HAZOP) analysis. A literature review. Journal of Hazardous Materials (2010).
- A model-based systems approach to pharmaceutical product-process design and analysis. Chemical Engineering Science (2010).
- Evaluation of decision fusion strategies for effective collaboration among heterogeneous fault diagnostic methods. Computers and Chemical Engineering (2011).
- A translation approach to portable ontology specifications. Knowledge Acquisition (1993).
- Monitoring, fault diagnosis, fault-tolerant control and optimization: Data driven methods. Computers and Chemical Engineering (2012).
- A framework for on-line trend extraction and fault diagnosis. Engineering Applications of Artificial Intelligence (2010).
- Fault diagnosis using dynamic trend analysis: A review and recent developments. Engineering Applications of Artificial Intelligence (2007).
- Base control problem for the Tennessee Eastman problem. Computers and Chemical Engineering.
- OntoCAPE: A (re)usable ontology for computer-aided process engineering. Computers and Chemical Engineering.
- Phase-based supervisory control for fermentation process development. Journal of Process Control.
- An ontology for distributed process supervision of large-scale chemical plants. Computers and Chemical Engineering.
- Bond graphs for the diagnosis of chemical processes. Computers and Chemical Engineering.
- The propagation of faults in process plants: Hazard identification for a water separator system (Part II). Reliability Engineering.
- The propagation of faults in process plants: Hazard identification (Part I). Reliability Engineering.
- Observer-based fault detection and isolation: Robustness and applications. Control Engineering Practice.
- An expert system for real-time fault diagnosis of complex chemical processes. Expert Systems with Applications.
- Application of signed digraphs-based analysis for fault diagnosis of chemical process flowsheets. Engineering Applications of Artificial Intelligence.
- Fault diagnosis support system for complex chemical plants. Computers and Chemical Engineering.
- Context-based recognition of process states using neural networks. Chemical Engineering Science.
- Monitoring transitions in chemical plants using enhanced trend analysis. Computers and Chemical Engineering.
- Digraph-based models for automated HAZOP analysis. Reliability Engineering and System Safety.
- A semi-quantitative reasoning methodology for filtering and ranking HAZOP results in HAZOPExpert. Reliability Engineering and System Safety.
- A review of process fault detection and diagnosis, Part II: Qualitative model and search strategies. Computers and Chemical Engineering.
- A review of process fault detection and diagnosis, Part III: Process history based methods. Computers and Chemical Engineering.
- A review of process fault detection and diagnosis, Part I: Quantitative model-based methods. Computers and Chemical Engineering.