Online dictionary learning algorithm with periodic updates and its application to image denoising

https://doi.org/10.1016/j.eswa.2013.11.036

Highlights

  • Dictionary learning for sparse modeling improves performance in various applications.

  • We introduce a coefficient update procedure for batch and online dictionary learning algorithms.

  • We propose a coefficient-updated version of the MOD dictionary learning algorithm.

  • We present a new online, periodically coefficient-updated dictionary learning algorithm.

  • Simulations demonstrate the improvement provided by the coefficient update procedure.

Abstract

We introduce a coefficient update procedure into existing batch and online dictionary learning algorithms. We first propose a coefficient-updated version of the Method of Optimal Directions (MOD) dictionary learning algorithm (DLA). The MOD algorithm with coefficient updates presents a computationally expensive dictionary learning iteration with a high convergence rate. Secondly, we present a periodically coefficient-updated version of the online Recursive Least Squares (RLS)-DLA, where the data are used sequentially to gradually improve the learned dictionary. The developed algorithm provides a periodic-update improvement over the RLS-DLA, and we call it the Periodically Updated RLS Estimate (PURE) algorithm for dictionary learning. The performance of the proposed DLAs in synthetic dictionary learning and image denoising settings demonstrates that the coefficient update procedure improves dictionary learning ability.

Introduction

Sparse signal representation in overcomplete dictionaries has attracted considerable interest (Kim et al., 2011, Plumbley et al., 2010, Rubinstein et al., 2010). A sparse representation compactly expresses a signal as a linear combination of elements, or atoms, drawn from an overcomplete set of signals. The number of atoms utilized in the linear combination is much smaller than the signal dimensionality, hence the designation sparse. The set of all atoms forms the redundant dictionary over which sparse representations are realized. A plethora of methods exists for the sparse representation of a signal over a given dictionary (Tropp & Wright, 2010). One class of algorithms comprises linear programming based optimization methods (Chen, Donoho, & Saunders, 1998). Another important class contains the greedy methods, e.g., Orthogonal Matching Pursuit (OMP) (Tropp & Gilbert, 2007), which present computationally practical solutions to the sparse representation problem; a minimal sketch of OMP follows below.
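To make the greedy approach concrete, here is a minimal OMP sketch in Python/NumPy. The function name, the early-stopping check, and the unit-norm-atom assumption are ours; the atom-selection and least-squares refitting steps follow the standard OMP description.

```python
import numpy as np

def omp(D, x, S):
    """Minimal Orthogonal Matching Pursuit (sketch).

    D : (M, K) dictionary with (assumed) unit-norm columns (atoms)
    x : (M,) signal to be sparsely represented
    S : target sparsity (number of atoms to select)
    Returns a (K,) coefficient vector w with at most S nonzeros.
    """
    K = D.shape[1]
    residual = x.astype(float).copy()
    support, coeffs = [], np.zeros(0)
    for _ in range(S):
        # Select the atom most correlated with the current residual.
        k = int(np.argmax(np.abs(D.T @ residual)))
        if k in support:          # no new atom helps; stop early
            break
        support.append(k)
        # Orthogonally project x onto the span of the selected atoms.
        coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coeffs
    w = np.zeros(K)
    w[support] = coeffs
    return w
```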

A subject related to sparse representation is dictionary learning (Gribonval and Schnass, 2010, Rubinstein et al., 2010, Tošić and Frossard, 2011, Yaghoobi et al., 2009), which considers the construction of the dictionary employed for the sparse coding of data. Dictionary learning examines the problem of training the atoms of a dictionary suitable for the joint sparse representation of a data set. Dictionary learning algorithms (DLAs) include Maximum Likelihood (ML) methods (Olshausen & Field, 1997), Maximum a posteriori Probability (MAP)-based methods (Kreutz-Delgado et al., 2003), the K-Singular Value Decomposition (K-SVD) algorithm (Aharon, Elad, & Bruckstein, 2006), direct optimization based methods such as that of Rakotomamonjy (2013), and the least-squares based Method of Optimal Directions (MOD) (Engan et al., 1999, Engan et al., 2007). Other recent approaches to the dictionary learning problem include Sadeghi et al. (2013) and Smith and Elad (2013).

In general, the previously listed methods are batch algorithms: they process the entire data set as a batch at each iteration. Recently, online DLAs have been proposed, which allow sequential dictionary learning as the data flow in. The online algorithms include the Recursive Least Squares (RLS)-DLA (Skretting & Engan, 2010), which is derived using an approach similar to the RLS algorithm employed in adaptive filtering. The RLS approach has also been used for sparse adaptive filtering in recent studies (Babadi et al., 2010, Eksioglu and Tanc, 2011). Another online DLA is the Online Dictionary Learning (ODL) algorithm of Mairal, Bach, Ponce, and Sapiro (2010).

In this paper we introduce a new DLA based on the least squares solution for the dictionary estimate, as is the case for the MOD algorithm and the RLS-DLA. We first present a variant of the MOD algorithm in which the sparse coefficients associated with the previously seen signals are recalculated at every iteration before the dictionary is updated. This variant has a much higher computational complexity than the MOD algorithm. We temper this computationally expensive variant by restricting the recalculation to periodic updates. The resulting algorithm, which we call the PURE algorithm, is developed by augmenting the RLS-DLA with periodic updates of the sparse representations before the dictionary estimate is formed. The PURE algorithm outperforms the RLS-DLA while maintaining the same asymptotic computational complexity as the RLS-DLA and MOD algorithms. Simulations show that the introduced PURE algorithm works well both in the synthetic dictionary reconstruction setting and in image denoising applications. To the best of our knowledge, this work presents the first attempt to introduce a periodic coefficient update into the two-step iterative dictionary learning procedure. Dictionary learning for given data sets results in performance improvements in various applications, including but not limited to image denoising and reconstruction (Liu et al., 2012, Wang et al., 2013, Yang et al., 2013) and various classification problems (Jiang, Lin, & Davis, 2013). Devising new and better dictionary learning approaches naturally leads to performance improvements in these applications.

In the coming sections, we begin by reviewing dictionary learning in general and the MOD and RLS-DLA algorithms. In Section 3 we introduce the coefficient-updated version of the MOD algorithm. In Section 4 we develop a new online dictionary learning algorithm by augmenting the RLS-DLA with periodic coefficient updates. Section 5 details the computational complexity of the novel algorithms compared to the existing methods. In Section 6 we provide detailed simulations for the novel algorithms; the simulation settings include synthetic dictionary recovery and image denoising.

Section snippets

Batch and online dictionary learning algorithms

The dictionary learning problem may be defined as finding the optimally sparsifying dictionary for a given data set. The dictionary learning problem might be formulated using different optimization objectives over a sparsity-regularized cost function for a given data set. Aharon et al. (2006) suggests the following expression for constructing a sparsifying dictionary:

$$\min_{D,W} \sum_{n=1}^{N} \lVert x_n - D w_n \rVert_2^2 \quad \text{subject to} \quad \forall n,\ \lVert w_n \rVert_0 \le S$$

or equivalently

$$\min_{D,W} \lVert X - D W \rVert_F^2 \quad \text{subject to} \quad \forall n,\ \lVert w_n \rVert_0 \le S.$$

Another similar objective for …
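To make the two-step structure behind this objective concrete, the sketch below alternates sparse coding with the MOD least-squares dictionary update $D = X W^{T} (W W^{T})^{-1}$. It reuses the `omp` helper sketched earlier; the initialization, iteration count, and renormalization details are our assumptions rather than the paper's exact settings.

```python
import numpy as np

def mod_dla(X, K, S, n_iter=50, seed=0):
    """Batch MOD sketch: alternate (1) sparse coding of all data over the
    current dictionary and (2) the least-squares dictionary update."""
    rng = np.random.default_rng(seed)
    M, N = X.shape
    D = rng.standard_normal((M, K))
    D /= np.linalg.norm(D, axis=0)                 # unit-norm atoms
    for _ in range(n_iter):
        # Step 1: sparse representation of every column of X (OMP sketch above).
        W = np.column_stack([omp(D, X[:, n], S) for n in range(N)])
        # Step 2: MOD update, D = X W^T (W W^T)^{-1}.
        D = X @ W.T @ np.linalg.pinv(W @ W.T)
        D /= np.linalg.norm(D, axis=0) + 1e-12     # renormalize atoms
    return D, W
```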

Least squares dictionary learning algorithm with coefficient update

We present a variation on the MOD algorithm whose time recursion differs from that of the RLS-DLA. The RLS-DLA, as presented in Algorithm 2, finds only the sparse representation of the current data vector $x_n$ at Step 5. Hence, the current instantaneous weight matrix $W_n$ is generated by concatenating the previous weight matrix $W_{n-1}$ with $w_n$, that is, $W_n = [W_{n-1}\ w_n]$. We suggest that better convergence in the dictionary update can be achieved if at each time instant also the …
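A sketch of our reading of this coefficient-updated recursion: at time $n$, all $n$ sparse representations seen so far are recomputed over the current dictionary before the least-squares update, rather than only $w_n$ being appended. Function names and the renormalization are assumptions; the exact MOD-CU iteration is specified in the paper.

```python
import numpy as np

def mod_cu(X, D0, S):
    """MOD with coefficient updates (sketch): at each time instant the
    whole weight matrix is recomputed before the dictionary update."""
    M, N = X.shape
    D = D0.copy()
    for n in range(1, N + 1):
        Xn = X[:, :n]
        # Recompute sparse codes of ALL n data vectors seen so far.
        Wn = np.column_stack([omp(D, Xn[:, i], S) for i in range(n)])
        # Least-squares dictionary update over everything seen so far.
        D = Xn @ Wn.T @ np.linalg.pinv(Wn @ Wn.T)
        D /= np.linalg.norm(D, axis=0) + 1e-12
    return D
```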

RLS-DLA with periodic weight updates: PURE-DLA

At time instant $n$, the MOD-CU algorithm necessitates solving the sparse representation problem for $n$ data vectors and finding the inverse of a $K \times K$ matrix. The RLS-DLA, on the other hand, requires solving the sparse representation problem only for the current data vector and requires no explicit matrix inversion, because the matrix inverse is calculated using a rank-one update similar to the RLS algorithm. Hence, it is tempting to find an algorithm which maintains the relative performance gain of …
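The sketch below illustrates the kind of compromise this motivates: an RLS-DLA-style rank-one (Sherman–Morrison) recursion, interrupted every `period` samples by a full coefficient update and least-squares dictionary rebuild. This is our hedged illustration of the periodic-update idea, not the paper's exact PURE recursion; in particular, the RLS-DLA forgetting factor is omitted, and all names and initializations are assumptions.

```python
import numpy as np

def pure_sketch(X, D0, S, period=10):
    """Periodic-update RLS-style dictionary learning (illustrative sketch)."""
    M, N = X.shape
    K = D0.shape[1]
    D = D0.copy()
    C = np.eye(K)                        # running estimate of (W W^T)^{-1}
    for n in range(1, N + 1):
        x = X[:, n - 1]
        w = omp(D, x, S)                 # sparse-code only the new vector
        # Rank-one (Sherman-Morrison) update of C, plus the dictionary update.
        u = C @ w / (1.0 + w @ C @ w)
        D = D + np.outer(x - D @ w, u)
        C = C - np.outer(u, w @ C)
        # Periodic coefficient update: re-solve all sparse codes seen so far
        # and rebuild the dictionary by least squares.
        if n % period == 0:
            W = np.column_stack([omp(D, X[:, i], S) for i in range(n)])
            C = np.linalg.pinv(W @ W.T)
            D = X[:, :n] @ W.T @ C
            # Renormalizing atoms leaves C only approximate; fine for a sketch.
            D /= np.linalg.norm(D, axis=0) + 1e-12
    return D
```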

Computational complexity comparison

In this section we discuss the computational complexities of the various DLAs in terms of the number of multiplications required. The common step in all the DLAs is the sparse representation step. If this sparse representation step is realized using OMP, its complexity is $O((S^2 + M)K)$, where $S$ is the sparsity of the data vectors, $M$ is the data vector length, and $K$ is the dictionary size. Assuming $S^2 \lesssim M$ and $M \lesssim K$, the sparse representation step complexity becomes $O(K^2)$. Since sparse representation is the …
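As a one-line check of that order estimate, under the stated assumptions:

$$S^2 \lesssim M \ \text{and}\ M \lesssim K \;\Longrightarrow\; (S^2 + M)K \,\lesssim\, 2MK \,\lesssim\, 2K^2, \quad\text{i.e.}\quad O\big((S^2 + M)K\big) = O(K^2).$$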

Simulation results

In this section we present experiments which detail the dictionary learning performance of the introduced algorithms compared to DLAs from the literature. We analyze the dictionary learning performance of the various algorithms under different signal-to-noise ratio (SNR) values. We also examine the performance of the PURE algorithm when utilized in image denoising. In the simulations of this section we have made use of the K-SVD implementations provided by the authors of Aharon et al. (2006).

Conclusions

We have presented a new dictionary learning algorithm for online dictionary training from data. Firstly, we considered a coefficient update improvement on the MOD algorithm, which we called the MOD-CU algorithm. This method provides a computationally expensive but improved variation on the MOD algorithm. Secondly, we proposed an algorithm which provides a compromise between the computationally expensive full MOD-CU iteration and the RLS-DLA, and we called this method the PURE algorithm. The …

References (32)

  • S. S. Chen et al. (1998). Atomic decomposition by basis pursuit. SIAM Journal on Scientific Computing.

  • J. M. Duarte-Carvajalino et al. (2009). Learning to sense sparse signals: Simultaneous sensing matrix and sparsifying dictionary optimization. IEEE Transactions on Image Processing.

  • E. M. Eksioglu et al. (2011). RLS algorithm with convex regularization. IEEE Signal Processing Letters.

  • M. Elad et al. (2006). Image denoising via sparse and redundant representations over learned dictionaries. IEEE Transactions on Image Processing.

  • M. Elad et al. (2010). On the role of sparse and redundant representations in image processing. Proceedings of the IEEE.

  • K. Engan et al. (1999). Method of optimal directions for frame design. In Proceedings of the...