Moving horizon estimation for ARMAX processes with additive output noise
Introduction
The Auto-Regressive Moving-Average with eXogenous input (ARMAX) model is a discrete-time series model widely used in control engineering for both system description and control design [1], [2], [3], [4], [5]. ARMAX models strike an excellent balance between complexity and performance in describing a large class of real processes [6], [7], but they do not account for measurement errors on the output of the process, which can degrade performance in applications such as filtering and fault diagnosis [8], [9].
The estimation problem for the ARMAX process is usually addressed with a Kalman filter [2], [3], [10], [11], which is the best linear estimator in the sense that no other linear filter yields a smaller variance of the estimation error [12], [13]. It is well known that the Kalman filter has a Gaussian maximum-likelihood interpretation [14], and, following this line of thought, studies have also been carried out in non-Gaussian noise settings. In [15], a maximum-likelihood estimator using the most recent N measurements was developed for the ARMAX process with t-distribution noise. Because the heavy tail of the t-distribution reduces the effect of large errors on the likelihood function, the estimator is claimed to be robust to outliers. Similar work can be found in [16], where the same methodology was applied to design a filter for the ARMAX process with generalized t-distribution noise. However, these works only concern the ARMAX process with perfectly measured outputs, which may be impractical in many applications because measurement errors are not taken into account.
From a structural point of view, the output of an ARMAX process is the sum of three regressions over past input, output and white-noise sequences. This structure makes the estimation problem for the ARMAX process very challenging, because a single measurement error may affect multiple data entries and thereby significantly degrade the accuracy of the estimates. Despite the wide use and proven usefulness of this scheme, the assumption of exact knowledge of the output is unrealistic in many practical contexts because of the obvious presence of measurement errors. To address this issue, extended ARMAX models that also include a description of the measurement errors have been proposed, and the problem of their identification has been studied [8], [9]. However, methods for the specific estimation and filtering problems for such models are yet to be explored. Introducing additional error terms on the output of the process makes the estimation problem ill-posed, and hence additional information is required to solve it.
Regularization, in simple terms, is the process of introducing additional information in order to solve ill-posed problems or to prevent overfitting. It has found wide application in a variety of research fields, including image processing [17], [18], compressive sensing [19], [20], [21] and machine learning [22], [23]. In control engineering, regularization techniques have been successfully applied to problems such as system identification [24], [25], parameter estimation [26] and state smoothing [27], [28], [29]. Using the standard ℓ1-norm regularization technique, the authors in [27] showed that their regularized least-squares algorithm was able to simultaneously handle measurement outliers and account for possibly correlated nominal noise. Following the same methodology, the authors in [28] developed robust smoothers for dynamical processes that are contaminated with outliers in both the measurements and the state dynamics. The problem studied in [29] is similar to that in [28]; sum-of-norms regularization was employed to induce group sparsity in the smoothing problem for a generic linear system whose process model is affected by impulsive disturbances. These works add extra regularization terms to the cost function that can be thought of as soft constraints on the estimates, thereby leveraging additional information such as error sparsity.
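As a toy illustration of how an ℓ1 penalty induces the sparsity exploited in [27], [28], [29], consider the separable problem min_v ½‖y − v‖² + λ‖v‖₁, whose exact solution is the soft-thresholding operator. The sketch below is our own (names and data are illustrative, not taken from the cited works); it shows that only entries exceeding the threshold survive, i.e. only suspected outliers are flagged:

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t*||.||_1: shrinks each entry toward zero
    # and sets entries with |z_i| <= t exactly to zero.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

# Residuals: mostly small nominal noise plus two large outliers.
residuals = np.array([0.1, -0.2, 5.0, 0.05, -4.0])
outlier_estimate = soft_threshold(residuals, 0.5)
```

Only the two large entries remain nonzero after thresholding; the small nominal noise is mapped to exactly zero, which is precisely the sparsity-promoting behavior of the ℓ1 penalty.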
In our previous work [30], a regularized moving horizon estimator was proposed for ARMAX processes with measurements that contain outliers. Auxiliary variables were introduced to model the occasionally occurring large errors, and ℓ1-norm regularization was employed to exploit their sparsity. Because the outlier vectors are modeled explicitly, the impact of an outlier on multiple time instants can be estimated and mitigated, yielding more robust and accurate estimates. Although the ℓ1-regularized approach performs well in rejecting outliers, the underlying optimization problem has no analytical solution and hence no stability result is available for the developed estimator. In addition, the sparsity assumption on the errors is not consistent with the commonly encountered zero-mean white noises, and the ℓ1-regularized approach may become computationally inefficient in non-sparse cases [31]. In this paper, we address a more general problem in which the measurement errors are assumed to have zero mean and unknown (but bounded) variances. A moving horizon estimator based on ℓ2-norm regularization is derived to achieve this goal, together with convergence results and unbiasedness properties. Compared to ℓ1-norm regularization, ℓ2-norm regularization is computationally efficient because it admits an analytical solution. Moreover, it produces non-sparse results, which is consistent with the fact that measurement errors are generally non-sparse. The advantages and challenges of moving horizon estimation with ℓ1- and ℓ2-norm objectives are also discussed in [32].
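The computational advantage of ℓ2-norm regularization mentioned above comes from the fact that a quadratic (ridge-type) problem min_x ‖y − Ax‖² + λ‖x‖² has the closed-form solution x = (AᵀA + λI)⁻¹Aᵀy, so no iterative solver is needed. A minimal generic sketch (not the paper's estimator, whose exact formulation appears in Section 3):

```python
import numpy as np

def ridge_solution(A, y, lam):
    # Closed-form minimizer of ||y - A x||^2 + lam * ||x||^2:
    # solve the regularized normal equations (A^T A + lam I) x = A^T y.
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
```

Because the regularized normal-equation matrix AᵀA + λI is positive definite for any λ > 0, the solution always exists and is unique, which is also what makes stability and unbiasedness analyses tractable for ℓ2-regularized estimators.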
The estimation task is achieved by minimizing a quadratic cost function that consists of three terms. The first two terms follow convention [33], [34], [35]: the first penalizes the deviation of the current estimated state from its a priori prediction, and the second penalizes the prediction error computed from the batch of latest measurements. The third term, weighted by a positive scalar parameter, is a regularization term penalizing the auxiliary variables that model the unknown measurement errors. The scalar parameter allows one to tune the amount of regularization and hence the suppression of output noise. The estimation error dynamics of such an estimator converge to an asymptotically stable steady state. If the a priori prediction of the initial state is unbiased, the estimator is unbiased at each time step; otherwise, it is asymptotically unbiased. Examples demonstrate that the proposed method can effectively cope with additive output noise as well as outliers and outperforms the commonly used Kalman filter. To the best of our knowledge, this is the first time a moving horizon estimator has been derived for ARMAX processes with additive output noise.
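To make the three-term structure concrete, the hedged sketch below jointly estimates a state vector x and an error vector v by minimizing μ‖x − x_prior‖² + ‖y − Cx − v‖² + γ‖v‖² over one window; the function name, the static measurement model and the scalar weights are our own illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def mhe_step(y, C, x_prior, mu, gamma):
    # Jointly estimate x and v minimizing, over z = [x; v]:
    #   mu*||x - x_prior||^2 + ||y - C x - v||^2 + gamma*||v||^2
    # The cost is quadratic, so the minimizer solves a linear system.
    N, n = C.shape
    A = np.hstack([C, np.eye(N)])                    # y ~ C x + v = A z
    D = np.diag(np.r_[mu * np.ones(n), gamma * np.ones(N)])
    H = A.T @ A + D                                  # regularized normal matrix
    b = A.T @ y + np.r_[mu * x_prior, np.zeros(N)]   # prior enters the rhs
    z = np.linalg.solve(H, b)
    return z[:n], z[n:]                              # state and error estimates
```

Increasing γ shrinks the estimated error vector toward zero (recovering a standard least-squares fit), while decreasing it attributes more of the residual to measurement error, which mirrors the tuning role of the regularization parameter described above.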
The rest of the paper is organized as follows. Section 2 introduces the structure of ARMAX processes and reviews the preliminaries of moving horizon estimation. Section 3 develops the moving horizon estimator for ARMAX processes with additive output noise and also reports its convergence results and unbiasedness properties. Simulation results are presented and discussed in Section 4, while the conclusions are drawn in Section 5.
Preliminaries
The ARMAX process and moving horizon estimation have already been introduced in the literature [1], [2], [3], [10] and [33], [36], [37], [38]. In this section, we give only the information necessary for deriving, in Section 3, the regularized moving horizon estimator under the additive-output-noise assumption.
Moving horizon estimation for ARMAX processes with additive output noise
In this section, we explicitly model the measurement errors by auxiliary variables vk, as introduced in Eq. (2), and propose a moving horizon estimator for ARMAX processes with additive output noise. Before proceeding further, we first define the vectors that contain the batch of the latest noisy measurements in the sliding window at time k. Then, according to Eq. (2), the following relations can be obtained. Next, substituting Eqs.
Examples
In this section, examples are given to show the effectiveness of the proposed moving horizon estimator (MHE). For easy reference, the estimation algorithm is summarized in Algorithm 1, and the whole process is repeated whenever a new measurement arrives. It should be noted that the state estimate computed from the standard least-squares MHE in Eq. (9) coincides with the Kalman filter state estimate at time k; a proof can be found in [47]. Here, we only present the results from the Kalman filter.
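For reference, the Kalman filter baseline used in such comparisons performs, at each time step, the standard measurement update shown below; this is a generic textbook implementation, not code from the paper:

```python
import numpy as np

def kalman_update(x_pred, P_pred, y, C, R):
    # Standard Kalman measurement update:
    # innovation covariance, gain, corrected state and covariance.
    S = C @ P_pred @ C.T + R
    K = P_pred @ C.T @ np.linalg.inv(S)
    x = x_pred + K @ (y - C @ x_pred)
    P = (np.eye(len(x_pred)) - K @ C) @ P_pred
    return x, P
```

Because this update treats each new measurement through a fixed noise covariance R, a single corrupted output cannot be re-examined later, whereas the MHE re-estimates the whole window of measurement errors at every step.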
Conclusion
In this paper, a method to perform moving-horizon estimation for ARMAX processes with additive output noise was presented. The output noise as well as outliers were modeled as auxiliary variables and estimated simultaneously with the states using ℓ2-norm regularization. Because the output noise is modeled explicitly, the impact of a measurement error on multiple time instants can be estimated and mitigated, leading to more precise estimates. An analytical solution is derived for the proposed MHE.
References (59)
- Identification of ARMAX models with additive output noise, IFAC Proc. Vol. (2009)
- Identification of ARMAX models with noisy input and output, IFAC Proc. Vol. (2011)
- Elastic-net regularization in learning theory, J. Complex. (2009)
- Segmentation of ARX-models using sum-of-norms regularization, Automatica (2010)
- Smoothed state estimates under abrupt changes using sum-of-norms regularization, Automatica (2012)
- Moving-horizon estimation with guaranteed robustness for discrete-time linear systems and measurements subject to outliers, Automatica (2016)
- Constrained linear state estimation – a moving horizon approach, Automatica (2001)
- On least-squares identification of ARMAX models, IFAC Proc. Vol. (2002)
- ARMA model identification from noisy observations based on a two-step errors-in-variables approach, IFAC-PapersOnLine (2017)
- Set membership identification of nonlinear systems, Automatica (2004)
- Moving-horizon state estimation for nonlinear discrete-time systems: new stability results and approximation schemes, Automatica
- Fast linear iterations for distributed averaging, Systems Control Lett.
- Distributed average consensus with least-mean-square deviation, J. Parallel Distrib. Comput.
- Constrained generalised minimum variance controller design using projection-based recurrent neural network, IET Control Theory Appl.
- Computer-Controlled Systems: Theory and Design
- Adaptive Filtering Prediction and Control
- Finite horizon MPC for systems in innovation form, Proceedings of the 50th IEEE Conference on Decision and Control and European Control Conference
- The Statistical Theory of Linear Systems
- System Identification: Theory for the User
- System Identification
- Time Series Analysis: Forecasting and Control
- Introduction to Time Series and Forecasting
- Optimal Filtering
- Lessons in Estimation Theory for Signal Processing, Communications, and Control
- Least-squares estimation: from Gauss to Kalman, IEEE Spectrum
- Filtering of the ARMAX process with generalized t-distribution noise: the influence function approach, Ind. Eng. Chem. Res.
- Super-resolution with sparse mixing estimators, IEEE Trans. Image Process.
- The split Bregman method for L1-regularized problems, SIAM J. Imaging Sci.