Systems & Control Letters

Volume 123, January 2019, Pages 108-115

Feedback polynomial filtering and control of non-Gaussian linear time-varying systems

https://doi.org/10.1016/j.sysconle.2018.11.004

Abstract

This paper deals with the optimal filtering and optimal output-feedback control of discrete-time, linear time-varying non-Gaussian systems. Under the hypothesis that the time-varying, non-Gaussian distributions of the state and measurement noises have bounded and known moments up to a given order, this work extends previous results on polynomial filtering and optimal control to the time-varying case. The properties of the resulting filtering and control algorithms are discussed in light of a stable recursive representation of the Kronecker powers of the system, obtained through a suitable rewriting of the system with an output injection term. The resulting sub-optimal algorithm inherits the structure and the properties of the classical LQG approach, but with enhanced performance.

Introduction

The output-feedback optimal control problem with quadratic cost for discrete-time linear systems is a classical problem in control engineering [1]. For linear Gaussian systems, it is solved by coupling the optimal estimator, in this case the Kalman Filter (KF), with the optimal state-feedback control obtained by solving a Riccati equation. In many applications, the Gaussian assumption is not satisfied. In particular, non-Gaussian problems often arise in digital communications when the noise interference is essentially non-Gaussian [2], in problems concerning fault estimation [3], sensor or actuator faults [4], intermittent observations [5], and biological or financial models with multiplicative noises [6], [7], [8]. In these cases, the KF is not the optimal estimator, and the control obtained by coupling its prediction with an optimal linear controller is only the optimal linear control. Many approaches have addressed the filtering problem for non-Gaussian systems, including Particle Filters (PF), based on Monte Carlo methods [9], sums of Gaussian densities [10], and the Unscented Kalman Filter (UKF) [11], among others. These general solutions can cope with nonlinearities and/or with the presence of noise outliers or unknown parameters [12], but they generally have a high computational cost.

In this paper, we extend the polynomial filtering and control approach for non-Gaussian systems [13], [14], [15] to time-varying systems and we show how the approach can be applied to the optimal control problem of non-Gaussian time-varying systems. The essential idea behind polynomial filtering dates back to [16], [17] and consists in estimating the system’s state as the projection onto the linear space of polynomial functions of the output. As is well known, in the Gaussian case the conditional expectation is recursively computed by the KF as the projection of the state onto the space of linear functions of the output. When the system is non-Gaussian, the conditional expectation generally is not a linear function of the output and the KF is suboptimal. Intuitively, projecting the state onto a larger space yields a better estimate, one that tends to the optimal estimate as the space tends to the set of measurable functions of the measurements. Polynomial filters project onto the linear space of polynomial functions of the output. Roughly speaking, this is obtained as a projection onto the space of linear functions of powers of the output – so the basic structure of a recursive linear filter is unchanged – and it requires augmenting the system with the powers of the state. The original approach in [16] suffers from the drawback that the resulting augmented system, because of the multiplicative noise, does not have a bounded noise covariance when the original system is not asymptotically stable. For this reason, the improvement of the control based on the resulting polynomial predictor [18], [19] with respect to the linear case is guaranteed only for asymptotically stable systems. More recent contributions [14], [15] have addressed this issue by suitably separating the deterministic and stochastic components of the state and by rewriting the stochastic part with an output injection term, in order to guarantee the internal stability of the augmented system and, consequently, of the filter. In this work, we adopt the same approach and extend the method to the time-varying case. In the time-varying case, it is not straightforward to obtain exponential stability of the augmented system when the time-varying output injection gain stabilizes the original time-varying system; this property is needed to prove that, with the output injection, the state and the noises of the extended system are second-order asymptotically bounded. A somewhat unexpected known result of the theory of polynomial filters/controllers is that, when the system is rewritten with an output injection gain, the performance depends on that gain [15]. This is clearly not the case for linear filters/controllers. In the time-invariant case one can attempt to identify the output injection gain that yields the best performance; in the time-varying case this is clearly not possible, and one is left with the problem of choosing a reasonable output injection gain. Thus, we present a simple method to design this output injection gain.
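To make the projection argument concrete, the following toy sketch (not the paper's algorithm) compares, on simulated non-Gaussian data, the best estimator that is affine in $y$ with the best estimator that is affine in $(y, y^2)$; the sample size, the distributions and the `poly_mse` helper are purely illustrative.

```python
import numpy as np

# Toy illustration (not the paper's algorithm): for a non-Gaussian pair (x, y),
# the best estimator affine in (y, y^2) attains a lower mean-square error than
# the best estimator affine in y alone -- the intuition behind projecting onto
# polynomial functions of the output.
rng = np.random.default_rng(0)
N = 200_000
x = rng.exponential(scale=1.0, size=N) - 1.0   # zero-mean, skewed "state"
v = rng.exponential(scale=0.5, size=N) - 0.5   # zero-mean, skewed noise
y = x + v                                      # scalar "measurement"

def poly_mse(features, target):
    """MSE of the least-squares fit of target on [1, features]."""
    Phi = np.column_stack([np.ones_like(target)] + features)
    coef, *_ = np.linalg.lstsq(Phi, target, rcond=None)
    return np.mean((target - Phi @ coef) ** 2)

mse_affine = poly_mse([y], x)        # projection onto affine functions of y
mse_quad = poly_mse([y, y ** 2], x)  # projection onto quadratic functions of y
print(f"affine MSE: {mse_affine:.4f}, quadratic MSE: {mse_quad:.4f}")
```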

Finally, this extension is important as many of the applications of non-Gaussian systems mentioned above concern time-varying systems. Moreover, nonlinear systems are often well approximated by linear time-varying systems and may benefit from the polynomial filtering approach.

The paper is organized as follows. We recall basic results about polynomial filtering for the time-varying case in Section 2. Polynomial filtering and control for time-varying systems are presented in Section 3. Section 4 presents an application to the control problem of switching linear non-Gaussian systems.

Notation. In a probability space $(\Omega,\mathcal{F},P)$ let $E$ denote the expectation. If $X$ and $Y$ are two random variables with the same distribution we write $X \sim Y$. If $X$ is a Gaussian random variable with mean $\mu$ and variance $\sigma^2$ we write $X \sim \mathcal{N}(\mu,\sigma^2)$. We denote with $\Pi|_{\mathcal{M}}$ the orthogonal projection onto a given closed subspace $\mathcal{M}$ of a given Hilbert space. Let $A$ be a matrix; then $A^\top$ is its transpose, $\|A\|$ is the induced norm, $\mathrm{tr}\{A\}$ is its trace and $(A)_{r,s}$ denotes the element in row $r$ and column $s$. Given some scalars $v_1,\dots,v_n$, then $v=\mathrm{col}(v_1,\dots,v_n)$ denotes the column vector $v=[v_1,\dots,v_n]^\top$. The Kronecker product of $A\in\mathbb{R}^{n\times m}$ and $B\in\mathbb{R}^{k\times l}$ is
$$A\otimes B=\begin{bmatrix} a_{11}B & \cdots & a_{1m}B\\ \vdots & \ddots & \vdots\\ a_{n1}B & \cdots & a_{nm}B \end{bmatrix}\in\mathbb{R}^{nk\times ml}.$$
The $i$th Kronecker power of $A$ is $A^{[i]}$, $\mathrm{st}\{A\}$ denotes the vectorization (or stack) function of $A$ and $\mathrm{st}^{-1}\{\cdot\}$ its inverse. See [20], Chapter 12 for details on matrices and Kronecker algebra. We denote with $I_{n,i}$ the identity matrix in $\mathbb{R}^{n^i\times n^i}$, with $I_n$ the identity matrix in $\mathbb{R}^{n\times n}$ and with $I$ the identity matrix of appropriate dimension when it is clear from the context. If $X\in\mathbb{R}^n$, then $X^{(1:p)}$ with $p<n$ denotes the vector in $\mathbb{R}^p$ of the first $p$ entries of $X$. Throughout the paper we will use the symbol $0$ to denote the zero scalar, vector or matrix of appropriate dimension.
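As a quick illustration of this notation, the following sketch (assuming NumPy and a column-wise stacking convention for $\mathrm{st}\{\cdot\}$, which may differ from the convention adopted in [20]) computes a Kronecker product, a Kronecker power and the stack/unstack pair.

```python
import numpy as np

# Notation sketch (assumptions: NumPy, column-wise stacking for st{.}).
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.eye(2)

A_kron_B = np.kron(A, B)            # A (x) B, here in R^{4 x 4}

def kron_power(M, i):
    """i-th Kronecker power M^{[i]} = M (x) M (x) ... (x) M (i factors)."""
    out = np.array([[1.0]])
    for _ in range(i):
        out = np.kron(out, M)
    return out

A_2 = kron_power(A, 2)              # A^{[2]} in R^{4 x 4}

st_A = A.reshape(-1, order="F")     # st{A}: stack the columns of A
A_back = st_A.reshape(A.shape, order="F")   # st^{-1}: recover A from st{A}
assert np.allclose(A_back, A)
```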

Section snippets

Problem formulation and preliminaries

We consider the class of linear time-varying, detectable and stabilizable systems driven by non-Gaussian additive noise described by the following equations:
$$x_{k+1}=A_k x_k + B_k u_k + F_k\omega_k,\qquad y_k = C_k x_k + G_k\omega_k.$$
For $k\ge 0$, $x_k\in\mathbb{R}^n$ is the system state, $\omega_k\in\mathbb{R}^r$ is a stochastic non-Gaussian noise, $u_k\in\mathbb{R}^p$ is the control input, $y_k\in\mathbb{R}^q$ is the measurement output, and matrices $A_k$, $B_k$, $C_k$, $F_k$, and $G_k$ are of appropriate dimensions. We assume that these matrices are uniformly bounded. As usual, for any $k\ge 0$, the covariance matrix $G_k G_k^\top$
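For concreteness, a minimal simulation sketch of a system of the form (1)–(2) is given below; the matrices, dimensions and the zero-mean non-Gaussian noise are placeholders chosen for illustration, not the ones used in the paper.

```python
import numpy as np

# Minimal simulation sketch of x_{k+1} = A_k x_k + B_k u_k + F_k w_k,
# y_k = C_k x_k + G_k w_k; all numerical values are illustrative placeholders.
rng = np.random.default_rng(1)
n, p, q, r, T = 2, 1, 1, 2, 50

def system_matrices(k):
    """Uniformly bounded, time-varying (A_k, B_k, C_k, F_k, G_k) -- placeholders."""
    A_k = np.array([[0.9, 0.1 * np.sin(0.2 * k)],
                    [0.0, 0.8]])
    B_k = np.array([[1.0],
                    [0.5]])
    C_k = np.array([[1.0, 0.0]])
    F_k = np.eye(n)
    G_k = np.array([[0.0, 1.0]])
    return A_k, B_k, C_k, F_k, G_k

x = np.zeros(n)
ys = []
for k in range(T):
    A_k, B_k, C_k, F_k, G_k = system_matrices(k)
    w_k = rng.exponential(1.0, size=r) - 1.0   # zero-mean, non-Gaussian noise
    u_k = np.zeros(p)                          # open loop, for illustration
    ys.append(C_k @ x + G_k @ w_k)             # y_k
    x = A_k @ x + B_k @ u_k + F_k @ w_k        # x_{k+1}
```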

Polynomial LTV filtering

It is well known that the minimum variance estimate in the class of affine transformations of $y_0,y_1,\dots,y_k$ is given by the Kalman filter (see [21]). The recursive polynomial estimate of (1)–(2) belongs to the set $\bar{\mathcal{P}}_{y^s_k}(\nu)$ and can be expressed as an affine transformation of the zero-mean vector $\mathrm{col}\big(Y_0^s,Y_1^s,\dots,Y_k^s\big)$, where
$$Y_k^s \doteq \mathrm{col}\Big(y^s_k,\; y_k^{s[2]}-G_k^{[2]}\Psi_{\omega,2},\; y_k^{s[3]}-G_k^{[3]}\Psi_{\omega,3},\;\dots,\; y_k^{s[\nu]}-G_k^{[\nu]}\Psi_{\omega,\nu}\Big).$$
It is in fact sufficient to focus on the stochastic system (11)–(13), with the initial condition $x_0^s=x_0-\bar{x}_0$, to compute (14).
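A possible way to assemble the extended observation $Y_k^s$ in code is sketched below; the correction terms $G_k^{[i]}\Psi_{\omega,i}$ are taken as precomputed inputs, since their computation from the noise moments follows the paper's construction, which is not reproduced here.

```python
import numpy as np

# Sketch of assembling Y_k^s for a given polynomial order nu. The correction
# terms G_k^{[i]} Psi_{omega,i} in the definition above are passed in as
# precomputed vectors (their computation is not reproduced here).

def kron_power(v, i):
    """i-th Kronecker power of a vector v."""
    out = np.array([1.0])
    for _ in range(i):
        out = np.kron(out, v)
    return out

def extended_output(ys_k, corrections, nu):
    """Stack ys_k and its corrected Kronecker powers up to order nu.

    corrections[i] is assumed to hold the term G_k^{[i]} Psi_{omega,i}
    subtracted from ys_k^{[i]} in the definition of Y_k^s.
    """
    blocks = [np.atleast_1d(ys_k)]
    for i in range(2, nu + 1):
        blocks.append(kron_power(ys_k, i) - corrections[i])
    return np.concatenate(blocks)

# Hypothetical usage with a scalar output and nu = 2:
Y_k = extended_output(np.array([0.3]), corrections={2: np.array([0.1])}, nu=2)
```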

Numerical example

In this section we consider a system of the form (1)–(2) where the matrices of the model $S_k=(A_k,B_k,C_k,F_k,G_k)$ switch every 5 time steps from version $S^{(1)}$ to version $S^{(2)}$ and vice versa. In particular, we have
$$A^{(1)}=\begin{bmatrix}1.1 & 0\\ 1 & 0.7\end{bmatrix},\quad A^{(2)}=\begin{bmatrix}0.5 & 0.25\\ 0.2 & 0.9\end{bmatrix},\quad B^{(1)}=\begin{bmatrix}1\\0\end{bmatrix},\quad B^{(2)}=\begin{bmatrix}0\\1\end{bmatrix},\quad C^{(1)}=\begin{bmatrix}1 & 0\end{bmatrix},\quad C^{(2)}=\begin{bmatrix}0 & 1\end{bmatrix};$$
matrices $F^{(i)}$ and $G^{(i)}$, $i=1,2$, are such that the state and output noise sequences
$$F^{(i)}\omega_k=\mathrm{col}\big(f^{(i)}_{1,k},f^{(i)}_{2,k}\big),\qquad G^{(i)}\omega_k=g^{(i)}_k,\qquad i=1,2,$$
are i.i.d. with probability mass functions shown in Table 1, where $\beta>0$ is a scaling factor; the
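The switching schedule and the model matrices above can be set up as in the following sketch; since Table 1 is not reproduced here, the support and probabilities of the noise mass function, as well as the value of $\beta$, are placeholders.

```python
import numpy as np

# Sketch of the switching setup described above. Table 1 is not reproduced
# here, so the noise mass function (and beta) are placeholders, not the
# values used in the paper.
A = [np.array([[1.1, 0.0], [1.0, 0.7]]), np.array([[0.5, 0.25], [0.2, 0.9]])]
B = [np.array([[1.0], [0.0]]), np.array([[0.0], [1.0]])]
C = [np.array([[1.0, 0.0]]), np.array([[0.0, 1.0]])]

def active_model(k, dwell=5):
    """Index (0 or 1) of the model in force at step k: switch every `dwell` steps."""
    return (k // dwell) % 2

rng = np.random.default_rng(2)
beta = 1.0
support = beta * np.array([-1.0, 0.0, 2.0])   # placeholder p.m.f. support
pmf = np.array([0.4, 0.4, 0.2])               # placeholder, zero-mean, skewed

T = 40
x = np.zeros(2)
for k in range(T):
    i = active_model(k)
    f_k = rng.choice(support, size=2, p=pmf)  # stands in for F^(i) w_k
    g_k = rng.choice(support, p=pmf)          # stands in for G^(i) w_k
    y_k = C[i] @ x + g_k
    x = A[i] @ x + f_k                        # no control input in this sketch
```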

Conclusions

The suboptimal algorithm presented in this paper inherits the structure of the classical LQG approach to the optimal output-feedback problem for linear systems; in particular, the computation of the controller gain is the same as in the Gaussian case. The improvement in performance comes from a more accurate filtering algorithm and depends, of course, on how far the system is from being Gaussian. On the theoretical side, it would be interesting to investigate whether the proposed scheme tends to

References (25)

  • Arulampalam, M.S., et al.

    A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking

    IEEE Trans. Signal Process. (2002)

  • Arasaratnam, I., et al.

    Discrete-time nonlinear filtering algorithms using Gauss–Hermite quadrature

    Proc. IEEE (2007)