Active learning surrogate models for the conception of systems with multiple failure modes

https://doi.org/10.1016/j.ress.2015.12.017

Highlights

  • An iterative method to identify the limits of a system is proposed.

  • The method is based on nested Gaussian process surrogate models.

  • A new selection criterion that is adapted to the system case is presented.

  • The benefits of the method are illustrated on an analytical example.

Abstract

Due to performance and certification criteria, complex mechanical systems have to take into account several constraints, which can be associated with a series of performance functions. Different software packages are generally used to evaluate such functions, and their computational costs can vary widely. In conception or reliability analysis, we are thus interested in identifying, at minimal total computational cost, the boundaries of the domain where all these constraints are satisfied. To this end, the present work proposes an iterative method that maximizes the knowledge about these limits while minimizing the required number of evaluations of each performance function. This method is based, first, on Gaussian process surrogate models defined on nested sub-spaces and, second, on an original selection criterion that takes into account the computational cost associated with each performance function. After presenting the theoretical basis of this approach, the paper compares its efficiency with that of alternative methods on an example.

Introduction

The conception (or risk assessment) of complex mechanical systems has to take into account a series of constraints. Such constraints can be due to certification criteria, performance objectives, cost limitations, and so on. In this context, the role of simulation has kept increasing over the last decades, as one should be able to predict whether a given configuration of the system is likely to fulfil these constraints without having to build it and test it experimentally. In many cases, the computation of these constraints involves a series of computer codes, whose underlying physics can vary a lot. For instance, in the car industry, the conception of a new vehicle can be subjected to constraints on its size and weight, which are rather easy to compute, but also on its emergency stopping distance, its crash resistance or its aerodynamic drag, which can be much more difficult to evaluate.

To be more precise, let us consider a particular system, $S$, whose design is assumed to be characterized by a vector of $d$ parameters, $x=(x_1,\ldots,x_d)\in\mathbb{R}^d$. It is assumed that the system constraints can be evaluated from the computation of $N\ge 1$ performance functions, $\{g_n,\ 1\le n\le N\}$, whose respective numerical costs (in CPU time, for instance), $C_n$, are assumed to be sorted in ascending order: $C_1\le C_2\le\cdots\le C_N$.

Thus, the conception domain, which is denoted by $\Omega$ and which defines the set of admissible designs for the considered system, can be written as:
$$\Omega=\bigcap_{n=1}^{N}\Omega_n,\qquad \Omega_n=\{x\in\mathbb{R}^d \mid g_n(x)\ge 0\}.$$
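As a minimal sketch of this definition, the membership test for $\Omega$ is simply the conjunction of the $N$ constraint checks. The sign convention (a design is admissible when every $g_n(x)\ge 0$) and the two toy constraints below are illustrative assumptions, not the paper's example.

```python
import numpy as np

def is_admissible(x, performance_functions):
    """Check membership of x in Omega = intersection of the Omega_n.

    Convention assumed here: the design x satisfies constraint n when
    g_n(x) >= 0, so x lies in Omega iff every g_n(x) >= 0.
    """
    return all(g(x) >= 0.0 for g in performance_functions)

# Toy constraints (illustrative only, not from the paper):
g1 = lambda x: 1.0 - np.sum(x**2)   # inside the unit ball
g2 = lambda x: x[0] - 0.1           # first coordinate above 0.1

print(is_admissible(np.array([0.5, 0.2]), [g1, g2]))  # True
print(is_admissible(np.array([0.0, 0.2]), [g1, g2]))  # False (g2 < 0)
```

In practice each $g_n$ call hides a costly simulation, which is precisely why the paper replaces the $g_n$ by surrogate models before running such membership tests at scale.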

Such a domain is a key element to perform optimizations of the system restricted to admissible design solutions, while being closely linked to reliability analysis, as its complement, $\mathbb{R}^d\setminus\Omega$, corresponds to the failure domain of the system. Hence, over the last decades, the identification of $\Omega$, or of its boundary, $\partial\Omega$, has motivated the development of several methods, which can be sorted into two main categories: direct and indirect methods. Among the direct methods, the first-order and second-order reliability methods (FORM/SORM) approximate the limit state $\partial\Omega$ by a linear or second-order polynomial function [9], [14], [13], [4]. When confronted with applications where the limit state is multimodal or strongly non-linear, alternative methods based on more advanced approximations have been introduced, such as support vector machine (SVM) techniques [19], [17], [11] and methods based on generalized least-squares linear regression [18], [10].

On the other hand, the indirect methods focus on approximating the performance functions themselves, and deduce the searched boundary in a second step. Among these methods, the Gaussian process regression (GPR) method, or kriging, continues to play a major role, owing to its ability to provide a robust approximation of $\Omega$, that is to say, an approximation whose precision can be quantified [15], [12], [16], [7].
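The ability to quantify precision is what the GPR predictive standard deviation provides. As a hedged sketch (using scikit-learn and a toy performance function, neither of which comes from the paper), the probability that the surrogate misclassifies the sign of $g$ at a point can be estimated as $\Phi(-|\mu(x)|/\sigma(x))$, where $\mu$ and $\sigma$ are the GP posterior mean and standard deviation:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def g(X):  # toy performance function (illustrative, not from the paper)
    return X[:, 0]**2 + X[:, 1]**2 - 0.5

# Fit a GP surrogate on a small design of experiments in [0, 1]^2.
X_train = rng.uniform(0.0, 1.0, size=(20, 2))
gpr = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), alpha=1e-8)
gpr.fit(X_train, g(X_train))

# Posterior mean and standard deviation at unobserved points.
X_test = rng.uniform(0.0, 1.0, size=(200, 2))
mu, sigma = gpr.predict(X_test, return_std=True)

# Probability that the GP misclassifies the sign of g at each point:
# values near 0.5 flag points close to the estimated limit state.
p_misclass = norm.cdf(-np.abs(mu) / np.maximum(sigma, 1e-12))
```

Points with high `p_misclass` are natural candidates for the next code evaluation, which is the basic idea behind the sequential strategies discussed next.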

Based on this very efficient tool, the idea of this paper is to present a sequential sampling strategy to minimize the uncertainties about the boundary $\partial\Omega$ at minimal computational budget. In particular, the proposed strategy takes into account the computational costs associated with the evaluation of each function, $\{C_1,\ldots,C_N\}$.

The outline of this work is as follows. Section 2 presents the theoretical basis of Gaussian process regression (GPR) and its use for the identification of limit states. The proposed method is then introduced in Section 3. Finally, the efficiency of the method is illustrated on an analytical example in Section 4.

Section snippets

Surrogate models for system reliability

Gaussian process regression is based on the assumption that each performance function, $g_n$, $1\le n\le N$, can be seen as a sample path of a stochastic process, which is supposed to be Gaussian for the sake of tractability. By conditioning this Gaussian process on a set of $Q\ge 1$ code evaluations, $\mathcal{S}_{\mathrm{learn}}=\{(x^{(q)},g_n(x^{(q)})),\ 1\le q\le Q\}$, it is possible to define very interesting predictors for the value of $g_n$ at any non-computed point of the input space. These predictors of the functions $g_n$ at any $x$ in $\mathbb{R}^d$, which
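The conditioning step above has a closed form. A minimal NumPy sketch of the resulting kriging predictor, assuming a zero prior mean, unit prior variance and a squared-exponential covariance (the paper's actual covariance choices may differ):

```python
import numpy as np

def rbf_kernel(A, B, length_scale=0.2):
    """Squared-exponential covariance between the rows of A and B."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-0.5 * d2 / length_scale**2)

def gp_posterior(X_learn, y_learn, X_new, noise=1e-10):
    """Posterior mean and variance of g at X_new, conditioned on the
    learning set {(x^(q), g(x^(q))), 1 <= q <= Q}."""
    K = rbf_kernel(X_learn, X_learn) + noise * np.eye(len(X_learn))
    k = rbf_kernel(X_learn, X_new)
    mean = k.T @ np.linalg.solve(K, y_learn)
    var = np.clip(1.0 - np.sum(k * np.linalg.solve(K, k), axis=0), 0.0, None)
    return mean, var
```

At the learning points the posterior mean interpolates the observed code values and the posterior variance vanishes, which is the property that makes the predictor "very interesting": its own uncertainty tells us where the surrogate can be trusted.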

Nested GPR-based surrogate models for system conception and reliability analysis

In the adaptations of the EGRA and AK-MCS procedures, no attention is paid to the computational cost associated with each performance function $g_n$. Moreover, the definition domains of the surrogate models, $\hat{g}_n$, are all the same (the entire input space for EGRA, or a finite set of input candidates for AK-SYS), which can be overly expensive. This motivates the introduction of a new adaptation for the approximation of $\Omega$ using GPR-based surrogate models. The idea of this adaptation is to limit as much
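The flavour of a cost-aware selection step can be sketched as follows. This is a simplified stand-in for the paper's criterion, not a reproduction of it: the score `std / (|mean| + eps) / cost` and the filtering of candidates through the cheaper surrogates (mimicking the nested definition domains) are illustrative assumptions.

```python
import numpy as np

def select_next_evaluation(candidates, surrogates, costs):
    """Pick the (candidate index, function index) pair to evaluate next.

    surrogates[n](candidates) must return (mean, std) arrays. For
    function n, a candidate is only considered if every cheaper
    surrogate m < n already classifies it as admissible (mean_m >= 0):
    costly codes are never queried where a cheap constraint is already
    violated. Among the remaining pairs, the one with the largest
    misclassification risk per unit cost is selected.
    """
    eps = 1e-12
    best = None  # (score, candidate_index, function_index)
    admissible = np.ones(len(candidates), dtype=bool)
    for n, (surrogate, cost) in enumerate(zip(surrogates, costs)):
        mean, std = surrogate(candidates)
        score = std / (np.abs(mean) + eps) / cost
        score[~admissible] = -np.inf
        i = int(np.argmax(score))
        if best is None or score[i] > best[0]:
            best = (score[i], i, n)
        admissible &= mean >= 0.0  # nested filtering for costlier models
    return best[1], best[2]
```

Dividing the uncertainty score by $C_n$ makes many cheap evaluations preferable to a few expensive ones whenever they reduce the boundary uncertainty comparably, which is the budget trade-off the section describes.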

Application

To illustrate the advantages of the approach presented in Section 3, let us consider the following problem, which is defined by the $N=3$ performance functions $g_1,g_2,g_3$, such that for all $x=(x_1,x_2)$ in $[0,1]^2$:
$$g_1(x)=\left(\frac{(x_1-0.6)^2}{0.1^2}-\frac{(x_2-0.5)^2}{0.15^2}\right)-\left(\frac{(x_1-0.3)^2}{0.45^2}+\frac{(x_2-0.5)^2}{0.4^2}\right)-1,$$
$$g_2(x)=\frac{(x_1-0.55)^2}{0.35^2}+\frac{(x_2-0.5)^2}{0.2^2}+\frac{(x_1-0.55)(x_2-0.5)}{0.3^2}-1,$$
$$g_3(x)=I_1(x)\times\left(\frac{(x_1-0.7)^2}{0.2^2}+\frac{(x_2-0.7)^2}{0.15^2}-\frac{(x_1-0.7)(x_2-0.7)}{0.2^2}\right)+I_2(x)\times\left(\frac{(x_1-0.2)^2}{0.2^2}+\frac{(x_2-0.4)^2}{0.25^2}\right)-1,$$
where
$$I_1(x)=\begin{cases}1 & \text{if } (x_1-0.7)^2+(x_2-0.7)^2 \le (x_1-0.2)^2+(x_2-0.4)^2,\\ 0 & \text{otherwise,}\end{cases}$$
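These functions can be implemented directly. Caveat: minus signs and fraction bars did not survive the text extraction of this snippet, so the subtraction operators marked "inferred" below are reconstructions, and $I_2 = 1 - I_1$ is an assumption (its definition is cut off); verify both against the published article before using this as a benchmark.

```python
def g1(x1, x2):
    # Operator between the two parenthesized groups inferred as "-".
    return ((x1 - 0.6)**2 / 0.1**2 - (x2 - 0.5)**2 / 0.15**2) \
        - ((x1 - 0.3)**2 / 0.45**2 + (x2 - 0.5)**2 / 0.4**2) - 1.0

def g2(x1, x2):
    # Tilted ellipse centred at (0.55, 0.5); all operators recoverable.
    return (x1 - 0.55)**2 / 0.35**2 + (x2 - 0.5)**2 / 0.2**2 \
        + (x1 - 0.55) * (x2 - 0.5) / 0.3**2 - 1.0

def g3(x1, x2):
    # I1 selects the branch whose centre, (0.7, 0.7) or (0.2, 0.4), is
    # closer to x; I2 = 1 - I1 is assumed (truncated in the snippet).
    i1 = float((x1 - 0.7)**2 + (x2 - 0.7)**2
               <= (x1 - 0.2)**2 + (x2 - 0.4)**2)
    i2 = 1.0 - i1
    branch1 = ((x1 - 0.7)**2 / 0.2**2 + (x2 - 0.7)**2 / 0.15**2
               - (x1 - 0.7) * (x2 - 0.7) / 0.2**2)  # "-" inferred
    branch2 = (x1 - 0.2)**2 / 0.2**2 + (x2 - 0.4)**2 / 0.25**2
    return i1 * branch1 + i2 * branch2 - 1.0
```

At the ellipse centres the quadratic parts vanish, so for instance `g2(0.55, 0.5)` and `g3(0.7, 0.7)` both equal `-1.0`, a quick sanity check on the reconstruction.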

Conclusions

Over the last decade, the use of surrogate models for the conception and reliability evaluation of complex systems has kept increasing. Indeed, they are very powerful tools to reduce the computational costs associated with estimating the limits of a system at a given precision. When these limits are characterized by performance functions, such surrogate models are generally combined with selection criteria that iteratively choose the new points to be evaluated in order to improve

References (19)

