Signal Processing

Volume 90, Issue 9, September 2010, Pages 2792-2799

Fast communication
Variable step-size normalized LMS algorithm by approximating correlation matrix of estimation error

https://doi.org/10.1016/j.sigpro.2010.03.027

Abstract

In this letter, we propose a variable step-size normalized least mean square (NLMS) algorithm. We study the relationship among the NLMS, recursive least squares (RLS) and Kalman filter algorithms. Based on this relationship, we derive an equation to determine the step-size of the NLMS algorithm at each time instant. In the steady state, the convergence of the proposed algorithm is verified by using an equation that describes the relationship among the mean-square error, the excess mean-square error, and the measurement noise variance. Through computer simulation results, we verify the performance of the proposed algorithm and the change in the variable step-size over the iterations.

Introduction

The normalized least mean square (NLMS) algorithm is an adaptive filter algorithm that is simple and easy to implement. There have been several studies on improving its performance [1]. A variable step-size is one of the improvements suggested for the NLMS algorithm [2], [3].

In this letter, we propose a variable step-size NLMS algorithm; the motivation for the proposed algorithm is the state-space approach to adaptive filter algorithms [4], [5]. According to this approach, an adaptive filter algorithm can be derived from state-space equations, and the NLMS algorithm is a special case of the state-space approach. The relationship between the recursive least squares (RLS) algorithm and the Kalman filter has been studied in [6]. We summarize these relationships and develop the proposed variable step-size NLMS algorithm by considering the relationship among the NLMS, RLS, and Kalman filter algorithms.

Conventionally, most variable step-size algorithms are derived by minimizing a criterion or cost function to determine the step-size value [3], [5]. In contrast, the proposed variable step-size algorithm is derived by approximating the correlation matrix of the estimation error. It does not require a differentiation operation to minimize a criterion for the step-size. Moreover, the step-size calculation of the proposed algorithm is simple and therefore does not pose a serious computational burden. The convergence of the proposed algorithm is confirmed by using the relationship that the sum of the excess mean-square error and the measurement noise variance equals the mean-square error [7].
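In standard steady-state notation (our symbols, not necessarily those of [7]), this relationship can be written as

J(\infty) = J_{ex}(\infty) + \sigma_v^2,

where J(\infty) is the steady-state mean-square error, J_{ex}(\infty) is the excess mean-square error, and \sigma_v^2 is the measurement noise variance.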

This letter is organized as follows. In Section 2, we summarize the relationship among the NLMS, RLS, and Kalman filter algorithms. In Section 3, we present the proposed variable step-size algorithm and verify its convergence. In Section 4, we show computer simulation results for the proposed algorithm and compare them with the variable step-size algorithms in [3], [5] to verify the performance. In Section 5, we conclude this letter.

Section snippets

Relationship among NLMS, RLS, and Kalman filter algorithms

A state-space equation without input force is given by

x_{i+1} = A_i x_i + w_i
y_i = C_i x_i + v_i

where x_i, y_i, w_i and v_i are the state, measurement, process noise, and measurement noise vectors, respectively, at time instant i [6]. We assume that all vectors are column vectors. The matrices A_i and C_i are the state transition and measurement matrices, respectively, at time instant i [6]. The Kalman filter is an algorithm to estimate the state vector when process and measurement noise exist. For i=1,2,…, the Kalman
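To make the connection concrete, the sketch below (ours, not the paper's; the function name kalman_gain_step and the scalar noise variances r and q are illustrative assumptions) performs one Kalman update for the model above with A_i = I and a row measurement matrix C_i = u_i^T, and notes how freezing the error covariance at P_i = I collapses the gain to an NLMS-type update.

    import numpy as np

    def kalman_gain_step(h_hat, P, u, y, r, q=0.0):
        # One Kalman update for x_{i+1} = x_i + w_i, y_i = u_i^T x_i + v_i.
        # h_hat: current state (coefficient) estimate, shape (M,)
        # P:     current error covariance, shape (M, M)
        # u:     input (regressor) vector, shape (M,)
        # r, q:  measurement and process noise variances (assumed scalars)
        e = y - u @ h_hat                     # innovation (a priori error)
        k = P @ u / (u @ P @ u + r)           # Kalman gain K = P u / (u^T P u + r)
        h_new = h_hat + k * e                 # state update (A_i = I)
        P_new = P - np.outer(k, u) @ P + q * np.eye(len(h_hat))  # covariance update
        return h_new, P_new

    # Special case discussed in the text: fixing P_i = I gives
    # k = u / (||u||^2 + r), i.e. a regularized NLMS update with unit step-size.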

Derivation of proposed algorithm

As stated in the previous section, the NLMS algorithm assumes P_i = I for all time instants i. This assumption is a rough approximation of P_i. For a better approximation of P_i, we set P_i = diag(λ_i), where diag(λ_i) is a diagonal matrix whose diagonal terms are all λ_i. We try to calculate λ_i at each time instant i. Although our treatment is relatively poor compared to the RLS algorithm, it provides a better solution compared to the NLMS algorithm. Moreover, the proposed treatment has less
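The snippet does not show the recursion used to compute λ_i, so the sketch below only illustrates the generic consequence of substituting P_i = λ_i I into the Kalman gain of the previous section; vss_nlms_step, lam, and r are placeholder names and are not taken from the paper.

    import numpy as np

    def vss_nlms_step(h_hat, u, y, lam, r):
        # NLMS-type update obtained by setting P_i = lam * I in the Kalman gain:
        # the effective normalized step-size is mu = lam*||u||^2 / (lam*||u||^2 + r),
        # which lies in (0, 1) and shrinks as lam decreases.
        e = y - u @ h_hat                       # a priori estimation error
        energy = u @ u                          # ||u_i||^2
        mu = lam * energy / (lam * energy + r)  # variable step-size
        h_new = h_hat + (mu / (energy + 1e-12)) * u * e
        return h_new, mu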

Simulation results

The simulation is executed under a channel estimation scenario. We randomly generate an optimal coefficient vector h_o for estimation. For the input signals, we use zero-mean white Gaussian random signals filtered through the AR model

G(z) = 1 / (1 - 0.9 z^{-1})

The signal-to-noise ratio (SNR) is defined as 10 log_10(E[y^2(i)] / E[v^2(i)]), and we set the SNR for the simulations to 30 dB. To evaluate the performance, the mean square deviation (MSD), defined as E[||h_o - ĥ_i||^2], is calculated by averaging over 100
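A minimal single-run reproduction of this setup (the filter length, number of samples, and fixed NLMS step-size below are our assumptions, not values stated in the snippet; the paper additionally averages the MSD over multiple realizations) could look as follows.

    import numpy as np

    rng = np.random.default_rng(0)
    M, N, snr_db = 16, 5000, 30              # filter length, samples, target SNR (assumed)
    h_o = rng.standard_normal(M)             # random optimal (channel) coefficients

    # Input: white Gaussian noise filtered through G(z) = 1 / (1 - 0.9 z^{-1})
    w = rng.standard_normal(N)
    x = np.zeros(N)
    for n in range(1, N):
        x[n] = 0.9 * x[n - 1] + w[n]

    # Regressor matrix, desired signal, and measurement noise at roughly 30 dB SNR
    u_mat = np.zeros((N, M))
    for n in range(N):
        m = min(n + 1, M)
        u_mat[n, :m] = x[n::-1][:m]          # [x[n], x[n-1], ..., x[n-M+1]]
    d_clean = u_mat @ h_o
    noise_var = np.mean(d_clean ** 2) / 10 ** (snr_db / 10)
    d = d_clean + np.sqrt(noise_var) * rng.standard_normal(N)

    # Fixed step-size NLMS baseline (a variable step-size scheme would adapt mu)
    h_hat, mu, msd = np.zeros(M), 0.5, np.zeros(N)
    for n in range(N):
        u, e = u_mat[n], d[n] - u_mat[n] @ h_hat
        h_hat = h_hat + (mu / (u @ u + 1e-8)) * u * e
        msd[n] = np.sum((h_o - h_hat) ** 2)  # ||h_o - h_hat||^2 for this run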

Conclusion

In this letter, we propose a variable step-size NLMS algorithm. The proposed algorithm is developed based on the relationship among the NLMS, RLS, and Kalman filter algorithms. We suggest approximating the correlation matrix of the estimation error by a diagonal matrix with identical diagonal terms, which is a better approximation than the identity matrix adopted in the standard NLMS algorithm. We also propose an equation for determining the common diagonal term by using an equation from the

Acknowledgments

This research was supported by the Ministry of Knowledge Economy (MKE), Korea, under the Information Technology Research Center (ITRC) support program supervised by the National IT Industry Promotion Agency (NIPA) (NIPA-2009-C1090-0902-0004).
