Article

Structural Smoothing Low-Rank Matrix Restoration Based on Sparse Coding and Dual-Weighted Model

School of Science, Beijing University of Civil Engineering and Architecture, Beijing 100044, China
* Author to whom correspondence should be addressed.
Entropy 2022, 24(7), 946; https://doi.org/10.3390/e24070946
Submission received: 16 June 2022 / Revised: 1 July 2022 / Accepted: 5 July 2022 / Published: 7 July 2022
(This article belongs to the Special Issue Information Theory in Signal Processing and Image Processing)

Abstract

Group sparse coding (GSC) uses the non-local similarity of images as a constraint, which fully exploits the structural and group sparse features of images. However, it imposes sparsity only on the group coefficients, which limits its effectiveness in reconstructing real images. Low-rank regularized group sparse coding (LR-GSC) bridges this gap by imposing low-rankness on the group sparse coefficients. However, due to the use of non-local similarity, the edges and details of the images are over-smoothed, resulting in blocking artifacts. In this paper, we propose a structural smoothing low-rank matrix restoration model based on sparse coding and dual weighting. In addition, total variation (TV) regularization is integrated into the proposed model to maintain local structural smoothness and edge features. Finally, to solve the resulting optimization problem, an optimization method is developed based on the alternating direction method. Extensive experimental results show that the proposed smoothed dual-weighted low-rank group sparse coding (SDWLR-GSC) algorithm outperforms state-of-the-art algorithms for image restoration when the images are corrupted by large and sparse noise, such as salt and pepper noise.

1. Introduction

Sparse representation theory [1,2,3] has long been a very active research field due to its good performance in computer vision and image processing, where it can be used to represent or extract the main features of images. Among existing works, image denoising methods based on patch sparse coding [4] divide the image into equally sized patches and use each patch's structure to encode the image as a combination of a dictionary and sparse coefficients. By constraining the $L_0$ norm of the coding coefficients, the true codes can be approximately estimated from the noisy image, so as to remove the noise.
On this topic, the dictionary is very important for the performance of these methods and must first be learned. Thus, popular techniques such as PCA [5], K-SVD [6], and S-KSVD [7] are used to train dictionaries with higher expressive ability. However, these are all large-scale and highly non-convex problems, which often incur high computational complexity. On the other hand, patches are the units of sparse representation: each patch is usually considered independently in dictionary learning and sparse coding, so these methods focus on the local structure of the image and essentially ignore the relationship between similar patches, that is, the non-local self-similarity (NSS) of the image [8,9,10,11].
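As a hedged illustration of the per-group dictionary step used later in this paper (Algorithm 1 constructs $D_i$ from each group by PCA), the following numpy sketch shows one common way to obtain an orthogonal PCA dictionary from a patch group; the array shapes and the toy data are illustrative assumptions, and some implementations mean-center the patches first.

```python
import numpy as np

def pca_dictionary(group):
    """Learn an orthogonal PCA dictionary from one group of similar patches.

    group: (b, m) array whose columns are vectorized b-pixel patches.
    The left singular vectors of the group serve as the dictionary atoms.
    """
    D, _, _ = np.linalg.svd(group, full_matrices=False)
    return D

# Toy usage: 64-dimensional patches, 160 similar patches in the group.
rng = np.random.default_rng(0)
group = rng.standard_normal((64, 160))
D = pca_dictionary(group)
codes = D.T @ group                   # sparse coding reduces to a projection
assert np.allclose(D @ codes, group)  # exact reconstruction: D is orthogonal
```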
These non-local image denoising approaches provide a new theory and direction for image denoising research. Group sparse coding (GSC) [12], which uses groups instead of single patches as the basic unit of sparse coding, combines the advantages of local sparsity and the NSS of images, and shows great potential in various image processing tasks [13,14]. Similar to patch-based sparse representation, each image patch group can also be accurately fitted by a sparse linear combination of dictionary atoms. To make full use of the similarity between groups, Zha et al. proposed low-rank regularized group sparse coding (LR-GSC) [15], which imposes a low-rank constraint on the sparse coefficients of each group. It is the first time that the low-rank property of group sparsity in GSC has been used to ensure that the dictionary-domain coefficients are not only sparse but also low-rank. Although the above methods achieve good results, they mainly target the removal of Gaussian white noise and cannot remove outlier noise effectively. In addition, when the noise density is high, the search for similar patches is affected, since each image patch also contains noisy pixels. Figure 1 shows the effect of the LR-GSC model on image restoration from Gaussian noise and outlier noise, respectively. It can be seen that the LR-GSC model is more suitable for removing Gaussian noise than outlier noise.
Because a matrix retains the two-dimensional structure of the data, low-rank matrix methods can effectively protect the original details and structural information of images; they are also more robust to data with outliers, for example, salt and pepper noise. Wright et al. [16] posed the low-rank restoration problem, which decomposes the original data into the sum of a low-rank matrix and a sparse noise matrix. Candès et al. [17] accurately recovered large low-rank matrices containing noisy samples through nuclear norm minimization. When this theory is used to denoise an image, the original image structure with low-rank characteristics is separated from the observed noisy image, which performs well in removing impulse and outlier noise.
When the rank or sparsity exceeds a threshold, the convex approximation model can no longer accurately estimate the low-rank and sparse solutions. For this reason, Candès et al. [18] proposed a weighted $L_1$ norm method that allocates smaller weights to larger matrix elements, restraining the over-shrinkage of the $L_1$ norm and improving the accuracy of the sparse solution. Gu et al. [19] proposed the weighted nuclear norm minimization (WNNM) model and the WNNM-based RPCA model (WNNM-RPCA) [20]. These models use the weighted nuclear norm to relax the rank function; considering the importance of different rank components, they assign weights according to the size of the singular values to control how strongly each rank component is penalized, so as to retain the more important rank components and improve the accuracy of the low-rank solution. Peng et al. [21] proposed a dual-weighted model that weights the sparse and low-rank terms in the RPCA model simultaneously, combined with a reweighting scheme that allocates the weights over the iterations; this model improves the accuracy of the sparse and the low-rank solution at the same time. Although low-rank theory performs well in removing impulse or outlier noise, it is mainly based on local similarity; that is, only the overall low-rank structure is considered.
Therefore, in order to overcome the above shortcomings, we propose a smoothing dual-weighted low-rank group sparse coding (SDWLR-GSC) algorithm to remove impulse or outlier noise. The main contributions are as follows:
(1) In addition to using the non-local self-similarity prior of the image, as in LR-GSC, to keep the coefficients of similar patches low-rank when constructing groups, we also impose a low-rank constraint on the reconstructed similar patches as a whole. We consider not only the low-rank property across similar patches but also local structural smoothness. Based on this, we combine the model with the TV norm to further strengthen the local structural smoothness of the matrix. Figure 2 shows an example of image recovery from dense noise with and without the TV norm.
(2) Because both the low-rankness of the coefficients of similar patches and the global low-rankness of the reconstruction matrix of similar patches are considered in the objective function, the optimization becomes a challenging non-convex problem. To solve it effectively, we develop a new numerical solution based on the inexact augmented Lagrange multiplier (IALM) method and non-uniform singular value thresholding (NSVT). The experimental results show that the proposed method improves the quality of the restored image significantly.
The remainder of this paper is organized as follows. Section 2 reviews the related work on low-rank regularized group sparse coding (LR-GSC). Section 3 presents our proposed SDWLR-GSC model, and Section 4 shows the solution of our proposed method. The experimental results are presented in Section 5. Finally, Section 6 summarizes this paper.

2. Related Work

In this section, we will review the work related to our proposed method.

2.1. Low-Rank Regularized Group Sparse Coding

In GSC [13,14], the method is mainly divided into two steps: grouping similar patches, and then sparsely representing the grouped patches, which can be formulated as the following problem,
$$\hat{A}_i = \arg\min_{A_i}\left(\frac{1}{2}\|X_i - D_i A_i\|_F^2 + \lambda\|A_i\|_1\right), \quad \forall i, \tag{1}$$
where $X_i \in \mathbb{R}^{b\times m}$ is a group composed of patches with similar structure, and $D_i$ is the dictionary learned from each group $X_i$. $\|\cdot\|_F$ and $\|\cdot\|_1$ denote the Frobenius norm and the $L_1$ norm, respectively. Thus, the optimal $\hat{A}_i$ gives the sparse codes of $X_i$, with $\lambda \geq 0$ a regularization parameter.
Zha et al. [15] proposed LR-GSC, which exploits the low-rank characteristics of the group coefficients in GSC to further utilize the similarity between groups; the dictionary-domain coefficients of each group are constrained to be not only sparse but also low-rank. Taking the sparsity penalty and the low-rank penalty as dual regularizers, the following problem is obtained:
$$\{\hat{A}_i, \hat{B}_i\} = \arg\min_{A_i, B_i} \frac{1}{2}\|X_i - D_i A_i\|_F^2 + \lambda\|A_i\|_1 + \frac{1}{\eta}\|A_i - B_i\|_F^2 + \tau\|B_i\|_*, \quad \forall i, \tag{2}$$
where $\|\cdot\|_1$ is applied for the sparsity penalty, and the nuclear norm $\|\cdot\|_*$ is applied for the low-rank penalty. $\tau$ is a non-negative constant, and $\eta$ is a balancing factor that makes (2) more feasible. A low-rank approximation $B_i$ is jointly estimated for each group sparse matrix $A_i$. Similar to GSC, the optimal sparse codes $\{\hat{A}_i\}_{i=1}^{n}$ are used to restore the latent image.

2.2. Dual-Weighted Low-Rank Matrix Recovery

The authors of [22] proved that the RPCA model can be solved via the following convex optimization problem, which minimizes a combination of the nuclear norm and the $L_1$ norm of the decomposition matrices, i.e.,
$$\min_{X,S} \|X\|_* + \lambda\|S\|_1, \quad \mathrm{s.t.} \quad X + S = Y, \tag{3}$$
where $Y \in \mathbb{R}^{m\times n}$ is the observed matrix. $\|X\|_*$ is the nuclear norm of matrix $X$, defined as the sum of its singular values, i.e., $\|X\|_* = \sum_{i=1}^{r}\sigma_i(X)$ with $r = \min(m, n)$, where $\sigma_i(X)$ is the $i$-th singular value of $X$. $\|S\|_1$ is the $L_1$ norm of matrix $S$, defined as the sum of the absolute values of its elements, that is, $\|S\|_1 = \sum_{i=1}^{m}\sum_{j=1}^{n}|s_{i,j}|$, where $s_{i,j}$ denotes an element of $S$. $\lambda > 0$ is the regularization parameter.
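To make the decomposition concrete, here is a minimal numpy sketch of solving (3) with an inexact augmented Lagrange multiplier iteration; the default $\lambda = 1/\sqrt{\max(m,n)}$, the initial $\mu$, and the growth factor are conventional choices from the RPCA literature, not values prescribed by this paper.

```python
import numpy as np

def svt(M, tau):
    # Singular value thresholding: proximal operator of tau * nuclear norm.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(M, tau):
    # Elementwise soft thresholding: proximal operator of tau * L1 norm.
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def rpca(Y, lam=None, mu=1e-2, rho=1.5, iters=200, tol=1e-7):
    """Split Y into low-rank X and sparse S by minimizing ||X||_* + lam*||S||_1."""
    m, n = Y.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    X = np.zeros_like(Y); S = np.zeros_like(Y); L = np.zeros_like(Y)
    for _ in range(iters):
        X = svt(Y - S + L / mu, 1.0 / mu)       # low-rank step
        S = soft(Y - X + L / mu, lam / mu)      # sparse step
        L += mu * (Y - X - S)                   # multiplier update
        mu *= rho
        if np.linalg.norm(Y - X - S) <= tol * np.linalg.norm(Y):
            break
    return X, S
```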
During the solution process, this model easily over-shrinks, which affects the accuracy of the solution. In [21], the sparse term and the low-rank term in the RPCA model are weighted at the same time, yielding a dual-weighted model, i.e.,
$$\min_{X,S} \|X\|_{\Omega,*} + \lambda\|W \odot S\|_1, \quad \mathrm{s.t.} \quad X + S = Y, \tag{4}$$
where $\|X\|_{\Omega,*} = \sum_{i=1}^{r} w_i\,\sigma_i(X)$ and $\|W \odot S\|_1 = \sum_{i=1}^{m}\sum_{j=1}^{n} w_{i,j}|s_{i,j}|$, with $s_{i,j}$ the elements of matrix $S$ and $W \in \mathbb{R}^{m\times n}$ the weight matrix. The weight of $s_{i,j}$ is defined as $w_{i,j} = 1/(|s_{i,j}| + \varepsilon)$.
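A short sketch of the corresponding weighted shrinkage and reweighting steps, assuming the elementwise weighting in (4); $\varepsilon = 0.1$ follows the setting used later in this paper, and the function names are illustrative.

```python
import numpy as np

def weighted_soft(M, tau):
    # Soft thresholding with a per-entry threshold matrix tau
    # (the proximal operator of ||W .* S||_1 up to scaling).
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def update_sparse_weights(S, eps=0.1):
    # w_ij = 1 / (|s_ij| + eps): large entries are penalized less, so they
    # survive the shrinkage; near-zero entries receive heavy penalties.
    return 1.0 / (np.abs(S) + eps)

def update_singular_weights(X, eps=0.1):
    # Analogous weights on singular values for the weighted nuclear norm.
    s = np.linalg.svd(X, compute_uv=False)
    return 1.0 / (s + eps)
```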

3. Formulation of the Proposed Method

Low-rank regularized group sparse coding (LR-GSC) [15] ensures that the dictionary domain coefficients of each group are not only sparse but also low-rank. However, when the density of outlier noise becomes higher, the method cannot recover the image well. In this section, inspired by the low-rank matrix restoration algorithm in [21,23], we propose a new group sparse coding model, called SDWLR-GSC, which introduces total variation (TV) regularization into low-rank group sparse coding to realize image structure smoothing. At the same time, the dual-weighted model is used to recover the clean image better from the large and sparse noise. Our new method can be formulated as follows:
$$\begin{aligned} \min_{X_i, S_i, A_i}\ & \sum_{j=1}^{q} w_{X_i,j}\,\tilde{\sigma}_j + \theta\,\|W_{S_i}\odot S_i\|_1 + \beta\,\|X_i\|_{TV} + \frac{1}{2\rho}\sum_{i=1}^{n}\|Y_i - D_i A_i\|_F^2 + \lambda\sum_{i=1}^{n}\|A_i\|_1 + \tau\sum_{i=1}^{n}\|A_i\|_* \\ \mathrm{s.t.}\ & X_i \in B_{l,u} \triangleq \{x_{i,j} \mid l \le x_{i,j} \le u\}, \quad \hat{Y}_i = X_i + S_i, \end{aligned} \tag{5}$$
where $Y_i \in \mathbb{R}^{b\times m}$ represents the matrix constructed by the batch of similar patches for each basic patch $y_i$. $\hat{Y}_i$ is calculated by multiplying $\hat{D}_i$ and $\hat{A}_i$, where $\hat{D}_i$ and $\hat{A}_i$ are the dictionary learned from each group $Y_i$ and the sparse representation of each group $X_i$ under the given dictionary, respectively. $W_{X_i} = \{w_{X_i,j}\}$ and $W_{S_i} \in \mathbb{R}^{b\times m}$ are the weights of $\{\tilde{\sigma}_j\}$ and of $S_i$, where $\{\tilde{\sigma}_j\}$ are the singular values of matrix $X_i$. The constraint $B_{l,u}$ states that pixel values are bounded; for example, $[0, 255]$. $\|\cdot\|_{TV}$ denotes the TV norm. In [24], total variation was proposed as a penalty term for image restoration; the advantage of TV regularization is that it restores edges well while eliminating noise.
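For reference, the discrete TV norm used as the smoothness penalty can be computed as in the following sketch; the anisotropic form is shown, and the isotropic variant is included as a common alternative (which of the two the paper uses is not specified here).

```python
import numpy as np

def tv_norm(X):
    # Anisotropic total variation: sum of absolute horizontal and
    # vertical finite differences.
    return np.abs(np.diff(X, axis=0)).sum() + np.abs(np.diff(X, axis=1)).sum()

def tv_norm_isotropic(X):
    dx = np.diff(X, axis=1)[:-1, :]   # crop so the two difference fields align
    dy = np.diff(X, axis=0)[:, :-1]
    return np.sqrt(dx**2 + dy**2).sum()
```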
Clearly, the proposed SDWLR-GSC, built on LR-GSC, exploits the sparsity and the low-rankness of the dictionary-domain coefficients of each group at the same time. For the image patches reconstructed from the given dictionary and the sparse codes, we further introduce the dual-weighted model to weight the low-rank and sparse terms, and we add the TV norm to maintain the smooth structure of images. Therefore, the proposed model can better separate the clean image from sparse noise and achieve better reconstruction results. Figure 3 shows the flowchart of the proposed SDWLR-GSC, taking simple image denoising as an example.

4. The Proposed Solution

In this section, to solve the proposed SDWLR-GSC problem, we present a solution method based on the alternating direction method of multipliers, which separates the multi-variable optimization problem into several single-variable problems, so that the variables can be updated one by one in each iteration.
First, to address the problem more easily, we introduce auxiliary variables $H_i = X_i$ and $B_i = A_i$ as follows:
$$\begin{aligned} \min_{H_i, X_i, S_i, A_i, B_i}\ & \sum_{j=1}^{q} w_{H_i,j}\,\sigma_j + \theta\,\|W_{S_i}\odot S_i\|_1 + \beta\,\|X_i\|_{TV} + \frac{1}{2\rho}\sum_{i=1}^{n}\|Y_i - D_i A_i\|_F^2 + \lambda\sum_{i=1}^{n}\|A_i\|_1 + \tau\sum_{i=1}^{n}\|B_i\|_* \\ \mathrm{s.t.}\ & X_i \in B_{l,u} \triangleq \{x_{i,j} \mid l \le x_{i,j} \le u\}, \quad \hat{Y}_i = X_i + S_i, \quad H_i = X_i, \quad B_i = A_i, \end{aligned} \tag{6}$$
where $W_{H_i} = \{w_{H_i,j}\}$ are the weights for the singular values $\{\sigma_j\}$ of matrix $H_i$, and $w_{H_i,j} = w_{X_i,j}$, $j = 1, \ldots, q$.
In this way, the augmented Lagrangian function of (6) can be written as follows:
$$\begin{aligned} f(H_i, X_i, S_i, A_i, B_i, Y_1, Y_2) = {} & \sum_{j=1}^{q} w_{H_i,j}\,\sigma_j + \theta\,\|W_{S_i}\odot S_i\|_1 + \beta\,\|X_i\|_{TV} + \frac{1}{2\rho}\sum_{i=1}^{n}\|Y_i - D_i A_i\|_F^2 \\ & + \lambda\sum_{i=1}^{n}\|A_i\|_1 + \frac{1}{2\eta}\sum_{i=1}^{n}\|A_i - B_i\|_F^2 + \tau\sum_{i=1}^{n}\|B_i\|_* \\ & + \langle Y_1, Y_i - H_i - S_i\rangle + \langle Y_2, X_i - H_i\rangle + \frac{\mu}{2}\left(\|Y_i - H_i - S_i\|_F^2 + \|X_i - H_i\|_F^2\right), \\ \mathrm{s.t.}\quad & X_i \in B_{l,u} \triangleq \{x_{i,j} \mid l \le x_{i,j} \le u\}. \end{aligned} \tag{7}$$
Then, we use the iterative alternating direction method, which optimizes one variable while fixing the remaining variables. In this way, the original complex multi-variable optimization problem is simplified into single-variable optimization problems whose solutions can be obtained analytically. In our case, there are seven sub-problems, namely the $H_i$, $X_i$, $S_i$, $A_i$, $B_i$, $Y_1$, and $Y_2$ sub-problems. Next, we give a detailed implementation for each of them.

4.1. $H_i$ Sub-Problem

If we fix the variables $(X_i, S_i, A_i, B_i, Y_1, Y_2)$, the variable $H_i$ can be solved by minimizing $f(H_i, X_i, S_i, A_i, B_i, Y_1, Y_2)$. Specifically,
$$\begin{aligned} \arg\min_{H_i} f(H_i, X_i, S_i, A_i, B_i, Y_1, Y_2) &= \arg\min_{H_i} \sum_{j=1}^{q} w_{H_i,j}\,\sigma_j + \langle Y_1, Y_i - H_i - S_i\rangle + \langle Y_2, X_i - H_i\rangle + \frac{\mu}{2}\left(\|Y_i - H_i - S_i\|_F^2 + \|X_i - H_i\|_F^2\right) \\ &= \arg\min_{H_i} \sum_{j=1}^{q} w_{H_i,j}\,\sigma_j + \mu\,\Big\|H_i - \tfrac{1}{2}\big(Y_i + X_i - S_i + Y_1/\mu + Y_2/\mu\big)\Big\|_F^2 \\ &= \arg\min_{H_i} \sum_{j=1}^{q} w_{H_i,j}\,\sigma_j + \mu\,\|H_i - L\|_F^2, \end{aligned} \tag{8}$$
where $L = \frac{1}{2}(Y_i + X_i - S_i + Y_1/\mu + Y_2/\mu)$. This minimization problem (8) can be solved by non-uniform singular value thresholding (NSVT) [23] as follows:
$$H_i = \mathcal{D}_{(2\mu)^{-1}W_{H_i}}(L), \tag{9}$$
where $\mathcal{D}_\epsilon(Q) = U\mathcal{S}_\epsilon(\Sigma)V^T$, with $U\Sigma V^T$ the singular value decomposition of matrix $Q$. $\mathcal{S}_\epsilon(\cdot)$ denotes the non-uniform soft-thresholding operator [25], whose $(i,j)$-th element is $\max(|q_{ij}| - \epsilon, 0)\,\mathrm{sgn}(q_{ij})$, where the parameter $\epsilon > 0$.
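A minimal numpy sketch of the NSVT operator $\mathcal{D}_{\epsilon}(\cdot)$ just defined; the function name and the vectorized thresholds are notational assumptions. With this helper, the update (9) reads `H = nsvt(L, w_H / (2 * mu))`.

```python
import numpy as np

def nsvt(Q, w):
    """Non-uniform singular value thresholding D_w(Q).

    Q: (p, q) matrix; w: per-singular-value thresholds, either a scalar
    or an array of length min(p, q). Each singular value is shrunk by
    its own threshold while the singular vectors are kept fixed.
    """
    U, s, Vt = np.linalg.svd(Q, full_matrices=False)
    return U @ np.diag(np.maximum(s - w, 0.0)) @ Vt
```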

4.2. $X_i$ Sub-Problem

If the variables $(H_i, S_i, A_i, B_i, Y_1, Y_2)$ are fixed, we can optimize $X_i$ by minimizing $f(H_i, X_i, S_i, A_i, B_i, Y_1, Y_2)$. Specifically,
$$\begin{aligned} \arg\min_{X_i} f(H_i, X_i, S_i, A_i, B_i, Y_1, Y_2) &= \arg\min_{X_i} \beta\,\|X_i\|_{TV} + \langle Y_2, X_i - H_i\rangle + \frac{\mu}{2}\|X_i - H_i\|_F^2 \\ &= \arg\min_{X_i} \beta\,\|X_i\|_{TV} + \frac{\mu}{2}\|X_i - (H_i - Y_2/\mu)\|_F^2 \\ &= \arg\min_{X_i} \beta\,\|X_i\|_{TV} + \frac{\mu}{2}\|X_i - R\|_F^2, \end{aligned} \tag{10}$$
where $R = H_i - Y_2/\mu$ and $X_i \in B_{l,u} \triangleq \{x_{i,j} \mid l \le x_{i,j} \le u\}$. Similar to the constrained convex problem represented by TV-norm image denoising in [23], if $\mu$ and $\beta$ are given, $\gamma$ can be obtained as $\gamma = \beta/\mu$. Then, the solution of problem (10) is given by:
$$X_i = P_{B_{l,u}}\big(R - \gamma\,\mathcal{L}(p, q)\big), \tag{11}$$
where $(p, q)$, $\mathcal{L}$, and $P_{B_{l,u}}$ are the matrix pair, linear operator, and orthogonal projection operator, respectively. In addition, the bounds of the constraint are set to $[l, u] = [0, 255]$.
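The projection $P_{B_{l,u}}$ itself is simply an elementwise clip. The full FGP solver from [23] is not reproduced here; as an illustrative stand-in (an assumption, not the paper's implementation), scikit-image's Chambolle TV denoiser can play the role of the TV step, with its `weight` argument only loosely corresponding to $\gamma$.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def project_box(X, l=0.0, u=255.0):
    # Orthogonal projection onto the box constraint B_{l,u}.
    return np.clip(X, l, u)

def tv_step(R, gamma):
    # Illustrative substitute for the FGP-based TV-denoising step in [23]:
    # smooth R with a TV prior, then project onto the pixel-value box.
    return project_box(255.0 * denoise_tv_chambolle(R / 255.0, weight=gamma))
```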

4.3. $S_i$ Sub-Problem

Similar to the other variables, if we fix $(H_i, X_i, A_i, B_i, Y_1, Y_2)$, the variable $S_i$ can be updated as the optimal solution of minimizing $f(H_i, X_i, S_i, A_i, B_i, Y_1, Y_2)$:
$$\begin{aligned} \arg\min_{S_i} f(H_i, X_i, S_i, A_i, B_i, Y_1, Y_2) &= \arg\min_{S_i} \theta\,\|W_{S_i}\odot S_i\|_1 + \langle Y_1, Y_i - H_i - S_i\rangle + \frac{\mu}{2}\|Y_i - H_i - S_i\|_F^2 \\ &= \arg\min_{S_i} \theta\,\|W_{S_i}\odot S_i\|_1 + \frac{\mu}{2}\big\|S_i - (Y_i - H_i + Y_1/\mu)\big\|_F^2. \end{aligned} \tag{12}$$
This minimization problem (12) can be solved by non-uniform soft thresholding (NST) [25] as follows:
$$S_i = \mathcal{S}_{\theta\mu^{-1}W_{S_i}}\big(Y_i - H_i + Y_1/\mu\big). \tag{13}$$

4.4. $A_i$ Sub-Problem

If we fix the variables $(H_i, X_i, S_i, B_i, Y_1, Y_2)$, the $A_i$ sub-problem reduces to the following optimization problem:
$$\arg\min_{A_i} f(H_i, X_i, S_i, A_i, B_i, Y_1, Y_2) = \arg\min_{A_i} \frac{1}{2\rho}\sum_{i=1}^{n}\|Y_i - D_i A_i\|_F^2 + \frac{1}{2\eta}\sum_{i=1}^{n}\|A_i - B_i\|_F^2 + \lambda\sum_{i=1}^{n}\|A_i\|_1. \tag{14}$$
We use the principal component analysis (PCA) of each group to learn the grouping sub-dictionary $D_i$ [26]. Since the learned PCA dictionary is orthogonal, Equation (14) is equivalent to the following problem:
$$\arg\min_{A_i} f(\cdot) = \arg\min_{A_i} \frac{1}{2\rho}\sum_{i=1}^{n}\|G_i - A_i\|_F^2 + \frac{1}{2\eta}\sum_{i=1}^{n}\|A_i - B_i\|_F^2 + \lambda\sum_{i=1}^{n}\|A_i\|_1 = \arg\min_{A_i} \frac{1}{2}\sum_{i=1}^{n}\|P_i - A_i\|_F^2 + v\sum_{i=1}^{n}\|A_i\|_1, \tag{15}$$
where $G_i = D_i^T Y_i$ (so that $Y_i = D_i G_i$), $P_i = \frac{\eta G_i + \rho B_i}{\eta + \rho}$, and $v = \frac{\rho\eta\lambda}{\eta + \rho}$. Then, this minimization problem (15) can be solved by non-uniform soft thresholding (NST) [25] as follows:
$$\hat{A}_i = \mathcal{S}_v(P_i), \quad \forall i. \tag{16}$$
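Under the orthogonality of the PCA dictionary, the whole $A_i$ update collapses to a few numpy lines. This sketch assumes the combination $P_i$ and threshold $v$ obtained by completing the square in (15); the names are illustrative.

```python
import numpy as np

def update_codes(Y, D, B, rho, eta, lam):
    # Orthogonal dictionary: ||Y - D A||_F^2 == ||G - A||_F^2 with G = D^T Y.
    G = D.T @ Y
    P = (eta * G + rho * B) / (eta + rho)   # weighted average of the two targets
    v = rho * eta * lam / (eta + rho)       # effective L1 threshold
    return np.sign(P) * np.maximum(np.abs(P) - v, 0.0)
```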

4.5. $B_i$ Sub-Problem

As with the above sub-problems, to find the update of $B_i$, the variables $(H_i, X_i, S_i, A_i, Y_1, Y_2)$ are fixed first, and the solution for $B_i$ is obtained by minimizing $f(H_i, X_i, S_i, A_i, B_i, Y_1, Y_2)$. Specifically,
$$\arg\min_{B_i} f(H_i, X_i, S_i, A_i, B_i, Y_1, Y_2) = \arg\min_{B_i} \frac{1}{2\eta}\|A_i - B_i\|_F^2 + \tau\|B_i\|_*. \tag{17}$$
This minimization problem (17) can be solved by non-uniform singular value thresholding (NSVT) [23] as follows:
$$B_i = \mathcal{D}_{\eta\tau}(A_i). \tag{18}$$

4.6. $Y_1$ and $Y_2$ Sub-Problems

Finally, $Y_1$ and $Y_2$ are the Lagrange multiplier matrices of the original optimization problem; they are updated after the other variables. With the other variables fixed, $Y_1$ is updated as follows:
$$Y_1 = Y_1 + \mu\,(Y_i - H_i - S_i). \tag{19}$$
Similarly, with the other variables fixed, $Y_2$ is updated as follows:
$$Y_2 = Y_2 + \mu\,(X_i - H_i). \tag{20}$$
In Algorithm 1, we summarize the complete algorithm of SDWLR-GSC for image restoration.
Algorithm 1 Smoothed and Dual-Weighted Low-Rank Group Sparse Coding (SDWLR-GSC).
Require: The degraded image $y \in \mathbb{R}^{m\times n}$ (assuming $m \ge n$);
Output: The final restored image $\hat{X}^{(K)}$;
1:  Initialization: $\hat{x}^{(0)} = y$, $A_i^{(0)} = 0$, $B_i^{(0)} = 0$;
2:  Set parameters $b$, $m$, $\rho$, $\eta$, $\lambda$, $\tau$, and Max-Iter;
3:  for $k = 1$ to Max-Iter do
4:    Iterative regularization: $y^{(k)} = \hat{x}^{(k-1)} + \delta\,(y - \hat{y}^{(k-1)})$
5:    Divide $y^{(k)}$ into a set of overlapping patches of size $b \times b$
6:    for each patch $y_i$ in $y^{(k)}$ do
7:       Find non-local similar patches to form a group $Y_i$
8:       Construct the dictionary $D_i$ from $Y_i$ using PCA
9:       Update $A_i$ by computing $A_i = D_i^T Y_i$
10:     Perform $[U_i, \Delta_i, V_i] = \mathrm{SVD}(A_i)$
11:     Estimate $\hat{B}_i$ by computing Equation (18)
12:     Estimate $\hat{A}_i$ by computing Equation (16)
13:     Update $\hat{Y}_i$ by computing $\hat{Y}_i = D_i \hat{A}_i$
14:     Apply Algorithm 2 to $\hat{Y}_i$ to estimate $X_i$
15:   end for
16:   Aggregate all $X_i$ to form the clean image $\hat{X}^{(k)}$
17: end for
Algorithm 2 Smoothed and Dual-Weighted Model for Image Denoising.
Require: non-local similar patch group $Y_i \in \mathbb{R}^{p\times q}$ (assuming $p \ge q$);
Output: solution $X_i^{(k)} = X_i^{(t+1)}$;
1:   Initialization: $W_{X_i}^{(0)} = \mathbf{1} \in \mathbb{R}^{q}$, $W_{S_i}^{(0)} = \mathbf{1}\cdot\mathbf{1}^T \in \mathbb{R}^{p\times q}$, $(X_i^{(0)}, S_i^{(0)}) \in \mathbb{R}^{p\times q}$, $H^{(0)} \in \mathbb{R}^{p\times q}$, $Y_1^{(0)} \in \mathbb{R}^{p\times q}$, $Y_2^{(0)} \in \mathbb{R}^{p\times q}$;
2:   Set parameters $\mu^{(0)} > 0$, $\xi = 10^{-7}$, $t = 0$, $\lambda$, $\delta$, and inneriter;
3:   while $\|Y_i - X_i - S_i\|_F / \|Y_i\|_F > \xi$ and $t < \text{inneriter}$ do
4:       Let $L^{(t+1)} = \frac{1}{2}\big(Y_i + X_i^{(t)} - S_i^{(t)} + Y_1^{(t)}/\mu^{(t)} + Y_2^{(t)}/\mu^{(t)}\big)$, $\gamma = \beta/\mu^{(t)}$, and $H_i^{(t+1)} = \mathcal{D}_{(2\mu^{(t)})^{-1}W_{X_i}}(L^{(t+1)})$
5:       Let $R^{(t+1)} = H^{(t+1)} - Y_2^{(t)}/\mu^{(t)}$; use the FGP algorithm [23] to compute $X_i^{(t+1)} = P_{B_{l,u}}\big(R^{(t+1)} - \gamma\,\mathcal{L}(p, q)\big)$
6:        $S^{(t+1)} = \mathcal{S}_{\theta\mu^{(t)-1}W_{S_i}}\big(Y_i - H^{(t+1)} + Y_1^{(t)}/\mu^{(t)}\big)$
7:        $Y_1^{(t+1)} = Y_1^{(t)} + \mu^{(t)}\big(Y_i - H^{(t+1)} - S_i^{(t+1)}\big)$
8:        $Y_2^{(t+1)} = Y_2^{(t)} + \mu^{(t)}\big(X_i^{(t+1)} - H^{(t+1)}\big)$
9:        $\mu^{(t+1)} = \delta\mu^{(t)}$, $t \leftarrow t + 1$
10:      Update weights [27]: the weights for each $i = 1, \ldots, p$ and $j = 1, \ldots, q$ are updated by
$$w_{X_i,j}^{(t+1)} = \frac{1}{\sigma_j^{(t)} + \epsilon_{X_i}}, \qquad w_{S_i,ij}^{(t+1)} = \frac{1}{|S_{i,ij}^{(t)}| + \epsilon_{S_i}},$$
where $\epsilon_{X_i}$ and $\epsilon_{S_i}$ are predetermined positive constants, and the singular value matrix is $\Sigma^{(t)} = \mathrm{diag}\big(\sigma_1^{(t)}, \ldots, \sigma_q^{(t)}\big) \in \mathbb{R}^{q\times q}$ with $[U^{(t)}, \Sigma^{(t)}, V^{(t)}] = \mathrm{svd}(X_i^{(t)})$
11: end while
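Putting the pieces together, the following is a condensed numpy transcription of Algorithm 2's inner loop for a single patch group, reusing the `nsvt`, `weighted_soft`, and `tv_step` helpers sketched above. The default parameter values and the growing $\mu$ schedule are conventional assumptions rather than the paper's tuned settings, and the TV step is the illustrative substitute from Section 4.2.

```python
import numpy as np

def denoise_group(Yg, theta=1.0, beta=1.0, mu=1e-2, growth=1.1,
                  eps=0.1, xi=1e-7, inner_iter=100):
    """One pass of the smoothed dual-weighted model on a patch group Yg."""
    X = np.zeros_like(Yg)
    S = np.zeros_like(Yg)
    Y1 = np.zeros_like(Yg)
    Y2 = np.zeros_like(Yg)
    wX = np.ones(min(Yg.shape))       # weights on singular values
    wS = np.ones_like(Yg)             # weights on sparse entries
    for _ in range(inner_iter):
        # H-step (9): weighted singular value shrinkage on the averaged target.
        L = 0.5 * (Yg + X - S + (Y1 + Y2) / mu)
        H = nsvt(L, wX / (2.0 * mu))
        # X-step (11): TV smoothing plus box projection (FGP in the paper).
        X = tv_step(H - Y2 / mu, beta / mu)
        # S-step (13): weighted elementwise shrinkage.
        S = weighted_soft(Yg - H + Y1 / mu, (theta / mu) * wS)
        # Multiplier and penalty updates (19)-(20).
        Y1 += mu * (Yg - H - S)
        Y2 += mu * (X - H)
        mu *= growth
        # Reweighting: large singular values / large sparse entries get
        # small weights, so they are shrunk less on the next pass.
        wX = 1.0 / (np.linalg.svd(X, compute_uv=False) + eps)
        wS = 1.0 / (np.abs(S) + eps)
        if np.linalg.norm(Yg - X - S) < xi * np.linalg.norm(Yg):
            break
    return X
```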

5. Results

To verify the performance of the SDWLR-GSC algorithm in image restoration, we carried out a large number of experiments and compared it with state-of-the-art methods based on low-rank matrices. We selected 16 classical images (Lena, Barbara, Couple, House, and Monarch are from the Set12 dataset; Frame, Road, Bridge, Elaine, Pentagon, and Lin are from Google Images; Flower is from the Kodak24 dataset; Monkey and Tank are from the USC-SIPI image dataset, http://sipi.usc.edu/database/ (accessed on 28 March 2022)) of size 512 × 512 as the test dataset. The images used in all experiments are shown in Figure 4. These experiments mainly target high-density outlier noise, which includes two types: salt and pepper noise and random-valued sparse additive noise. Thus, we conducted two classes of numerical experiments: first, the test image is corrupted only by different levels of large and sparse additive noise; second, the test image is corrupted only by different levels of salt and pepper noise. The peak signal-to-noise ratio (PSNR) [28] and the structural similarity index metric (SSIM) [27] were used as quality evaluation indicators; the larger the PSNR and SSIM values, the better the image quality. All experiments were carried out in MATLAB R2020a on Windows 10, with an Intel Core i7-2600 CPU at 3.40 GHz and 8 GB of memory.
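For reference, PSNR as used in the evaluation, assuming 8-bit images with peak value 255:

```python
import numpy as np

def psnr(ref, est, peak=255.0):
    # Peak signal-to-noise ratio (dB) between a reference and an estimate.
    mse = np.mean((ref.astype(np.float64) - est.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```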

5.1. Parameter Setting

The parameter settings involved in this algorithm are as follows. The size of the input matrix is $m \times n$. In Algorithm 1, we set $A_i^{(0)} = 0$ and $B_i^{(0)} = 0$. In Algorithm 2, we set $X_i^{(0)} = 0$, $S_i^{(0)} = 0$, and $H_i^{(0)} = 0$. Following the practice in [29] and our tests, we set the Lagrange multiplier matrices $Y_1^{(0)} = M/\max(\|M\|_2, \lambda^{-1}\|M\|_\infty)$ and $Y_2^{(0)} = 0$, where $\|M\|_2$ is the spectral norm of matrix $M$ and $\|M\|_\infty$ is the maximum absolute value of the entries of $M$. In addition, the constants $\epsilon_{X_i}$ and $\epsilon_{S_i}$ in Algorithm 2 are set to 0.1, the same as in [21].
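In numpy terms, this initialization is a one-liner; `M` here stands for the observed group matrix:

```python
import numpy as np

def init_multiplier(M, lam):
    # Y1^(0) = M / max(||M||_2, ||M||_inf / lam), as in [29].
    return M / max(np.linalg.norm(M, 2), np.abs(M).max() / lam)
```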
In addition, we set $\theta$ in (13), which controls the sensitivity of the model to coefficient errors, to $1.25/\max(m, n)$, and we set $\beta$ in (11), which controls the sensitivity of the model to the TV norm, to $10^{-8}/\max(m, n)$. Finally, in Algorithm 2, $\gamma$ is computed as $\gamma = \beta/\mu$, where $\mu$ corresponds to the parameter $\mu^{(t)}$ in Algorithm 2, and we set $\delta = 0.1$.
The above parameters can be set according to experience. However, in Algorithm 1, the size of patch b, the number of non-local similar patches m, balancing factors ρ and η in (15), regularization parameters λ in (15) and τ in (17), and other parameters are determined by parameter experiments.
In the parameter experiments for the patch size $b$ and the number of patches $m$, $b$ ranges over multiples of 2 from 8 to 64, and $m$ ranges from 100 to 180 with a sampling interval of 20. Figure 5 shows the influence of the values of $b$ and $m$ on the restoration results when the noise density is $p = 0.3$. Accordingly, when the noise density is $p = 0.3$, we set the patch size to $64 \times 64$ and search for 160 similar patches. In addition, based on experiments, when $p = 0.2$ and $p = 0.4$, we set the patch sizes to $60 \times 60$ and $70 \times 70$, and the numbers of non-local similar patches selected are 160 and 190, respectively.
It can be seen that as the noise density increases, the patch size also increases. When the noise density increases, originally similar patches may differ from each other, so enlarging the patches and increasing the number of similar patches improves the accuracy of the algorithm.
In addition, we analyze the balancing factors $\rho$ and $\eta$ and the regularization parameters $\lambda$ and $\tau$ through parameter experiments. Figure 6 shows the influence of each of these parameters on the restoration results when the noise density is $p = 0.3$ and the other parameters are fixed. Accordingly, we set $\rho = 1$, $\eta = 0.1$, $\lambda = 0.02$, and $\tau = 0.5$ when the noise density is $p = 0.3$.
In Algorithm 1, the maximum number of iterations Max-Iter, and in Algorithm 2, the maximum number of inner iterations inneriter, are parameters related to the convergence of the algorithm. Because SDWLR-GSC is a non-convex model, it is difficult to prove global convergence theoretically; therefore, this paper analyzes the influence of the iteration counts on the restoration results experimentally. Based on these experiments, Max-Iter = 2 and inneriter = 100 are selected to avoid unnecessary iterative computation.

5.2. Large Sparse Additive Noise Removal

We first test the proposed SDWLR-GSC on removing large and sparse additive noise. We suppose that the ratio of damaged pixels to all pixels in the image is $p$ and that the damaged pixels take the value 255, generating a noisy observation image. In this test, we report denoising results at three noise levels, namely $p = 0.2$, $p = 0.3$, and $p = 0.4$. The remaining parameters in the algorithm are set as follows. In Algorithm 1, the number of iterations $k$ and the patch size are set according to the noise level; for higher noise levels, we choose larger patches and run more iterations. According to experiments, we set the patch sizes to $60 \times 60$, $64 \times 64$, and $70 \times 70$ for $p = 0.2$, $p = 0.3$, and $p = 0.4$, respectively, and select 160, 160, and 190 non-local similar patches at these noise levels.
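A sketch of this noise model; the RNG seed is an illustrative assumption:

```python
import numpy as np

def add_large_sparse_noise(img, p, seed=0):
    # Set a random fraction p of pixels to 255, as in the test protocol.
    rng = np.random.default_rng(seed)
    noisy = img.astype(np.float64).copy()
    noisy[rng.random(img.shape) < p] = 255.0
    return noisy
```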
We compare the proposed SDWLR-GSC with state-of-the-art methods, including the PCP algorithm [30], the reweighted $L_1$ algorithm [31], the NSVT method [21], and the SRLRMR algorithm [29]. We also compare it with the LR-GSC algorithm [15]. It can be seen from Table 1 that at all noise levels, the proposed SDWLR-GSC achieves higher PSNR results than the other competitive methods. In addition, LR-GSC is more suitable for removing Gaussian noise than large and sparse noise.
At noise probability $p = 0.4$, on average, the PSNR of our method is 13.11 dB higher than that of PCP, 7.87 dB higher than that of the reweighted $L_1$ method, 11.97 dB higher than that of NSVT, and 5.09 dB higher than that of SRLRMR. Figure 7 shows a visual comparison on one image (Lena) when the noise density is $p = 0.2$. It can be observed that the PCP, reweighted $L_1$, and NSVT methods cannot completely restore the damaged image; the SRLRMR method better completes the denoising task but is slightly lacking in structure, with edges still blurred. In contrast, our SDWLR-GSC method not only effectively eliminates the noise but also retains sharp edges and fine details.

5.3. Salt and Pepper Noise Removal

In this subsection, we apply the proposed SDWLR-GSC to remove salt and pepper noise; that is, we additionally add noise with pixel value 0 on top of the noise in the previous section, so that the ratio of the total number of damaged pixels (values 0 and 255) to the total number of pixels is $p$. Similarly, in this test we report three noise levels, $p = 0.2$, $p = 0.3$, and $p = 0.4$. We compare our model with LR-GSC [15], PCP [30], the WNNM-RPCA model [20], the WSNM-RPCA model [32], WSNM-$L_1$ [33], and the DWLP model [34].
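The corresponding generator, assuming (as one plausible reading of the protocol) that the corrupted fraction $p$ is split evenly between value-0 and value-255 pixels:

```python
import numpy as np

def add_salt_pepper(img, p, seed=0):
    # Corrupt a fraction p of pixels: half pepper (0), half salt (255).
    rng = np.random.default_rng(seed)
    r = rng.random(img.shape)
    noisy = img.astype(np.float64).copy()
    noisy[r < p / 2] = 0.0
    noisy[(r >= p / 2) & (r < p)] = 255.0
    return noisy
```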
Table 2 quantitatively compares the denoising results of the various methods under different salt and pepper noise probabilities $p$. It can be seen that for $p = 0.2$, $p = 0.3$, and $p = 0.4$, the average PSNR of our model is higher than that of LR-GSC, PCP, WNNM-RPCA, WSNM-$L_1$, and DWLP. Moreover, Figure 8 shows a visual comparison on one image (House) when the noise density is $p = 0.3$.
Through the experiments, we can draw several conclusions. First, strong salt and pepper noise destroys the sparse prior and the low-rankness of the image, so the restoration performance of the PCP model is poor. Second, the PSNR of our model is higher than the average level of the other models, which shows that using the dual-weighted model in group sparse coding, processing the low-rank and sparse components at the same time, and introducing the TV norm to enforce structural smoothness together reconstruct the low-rank structure of images more accurately.

6. Conclusions

In this paper, we constructed a smoothed dual-weighted low-rank group sparse coding model that combines group sparse coding, the TV norm, and a dual-weighted model, and we demonstrated its superior performance in image denoising. Experimental results show that this method is clearly superior to the original PCP optimization, reweighted $L_1$ norm minimization, and the NSVT and SRLRMR algorithms in removing large and sparse additive noise, and clearly superior to PCP optimization, WNNM-RPCA, WSNM-RPCA, WSNM-$L_1$, and DWLP in removing salt and pepper noise.
Although the proposed method performs well, there is still room for improvement. The solver in this paper is based on the alternating direction method, and because matrix multiplications, matrix inversions, and singular value decompositions are required at each iteration, it has high computational complexity for large matrices. In addition, when the underlying matrix becomes complex, for example when it has a high intrinsic rank or the corruption becomes dense, satisfactory performance may not be obtained. Therefore, reducing the computational complexity while maintaining high performance will be a direction of our future research, as will other applications of the proposed method.

Author Contributions

Conceptualization, J.W. and H.W.; methodology, J.W. and H.W.; software, J.W.; validation, J.W. and H.W.; formal analysis, H.W.; investigation, H.W.; resources, H.W.; data curation, J.W.; writing—original draft preparation, J.W.; writing—review and editing, J.W. and H.W.; visualization, J.W.; supervision, H.W.; project administration, H.W.; funding acquisition, H.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant number 41971396, 61971290; the Research Ability Enhancement Program for Young Teachers of Beijing University of Civil Engineering and Architecture, grant number X21024; the Outstanding Youth Program of Beijing University of Civil Engineering and Architecture, and the Beijing Municipal Education Commission Science and Technology General Project, grant number KM202110016001.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China (Nos. 41971396, 61971290), the Research Ability Enhancement Program for Young Teachers of Beijing University of Civil Engineering and Architecture (No. X21024), the Outstanding Youth Program of Beijing University of Civil Engineering and Architecture, and the Beijing Municipal Education Commission Science and Technology General Project (No. KM202110016001).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Elad, M. Sparse and Redundant Representations: From Theory to Applications in Signal and Image Processing; Springer: New York, NY, USA, 2010.
  2. Wright, J.; Ma, Y.; Mairal, J.; Sapiro, G.; Huang, T.S.; Yan, S. Sparse representation for computer vision and pattern recognition. Proc. IEEE 2010, 98, 1031–1044.
  3. Elad, M.; Figueiredo, M.A.; Ma, Y. On the role of sparse and redundant representations in image processing. Proc. IEEE 2010, 98, 972–982.
  4. Xu, J.; Zhang, L.; Zuo, W.; Zhang, D. Patch group based nonlocal self-similarity prior learning for image denoising. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; pp. 244–252.
  5. Jolliffe, I.T. Principal component analysis. J. Mark. Res. 2002, 87, 513.
  6. Aharon, M.; Elad, M.; Bruckstein, A. K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Trans. Signal Process. 2006, 54, 4311–4322.
  7. Rubinstein, R.; Zibulevsky, M.; Elad, M. Double sparsity: Learning sparse dictionaries for sparse signal approximation. IEEE Trans. Signal Process. 2010, 58, 1553–1564.
  8. Mairal, J.; Bach, F.; Ponce, J.; Sapiro, G.; Zisserman, A. Non-local sparse models for image restoration. In Proceedings of the 2009 IEEE 12th International Conference on Computer Vision (ICCV), Kyoto, Japan, 27 September–4 October 2009.
  9. Zhang, J.; Zhao, D.; Gao, W. Group-based sparse representation for image restoration. IEEE Trans. Image Process. 2014, 23, 3336.
  10. Dong, W.; Shi, G.; Ma, Y.; Li, X. Image restoration via simultaneous sparse coding: Where structured sparsity meets Gaussian scale mixture. Int. J. Comput. Vis. 2015, 114, 217–232.
  11. Wang, Q.; Zhang, X.; Wu, Y.; Tang, L.; Zha, Z. Non-convex weighted lp minimization based group sparse representation framework for image denoising. IEEE Signal Process. Lett. 2017, 24, 1686–1690.
  12. Liu, J.; Yang, W.; Zhang, X.; Guo, Z. Retrieval compensated group structured sparsity for image super-resolution. IEEE Trans. Multimed. 2016, 19, 302–316.
  13. Yu, H.; Gao, L.; Liao, W.; Zhang, B.; Zhuang, L.; Song, M.; Chanussot, J. Global spatial and local spectral similarity-based manifold learning group sparse representation for hyperspectral imagery classification. IEEE Trans. Geosci. Remote Sens. 2020, 58, 3043–3056.
  14. Xu, J.; Qiao, Y.; Fu, Z.; Wen, Q. Image block compressive sensing reconstruction via group-based sparse representation and nonlocal total variation. Circuits Syst. Signal Process. 2019, 38, 304–328.
  15. Zha, Z.; Wen, B.; Yuan, X.; Zhou, J.; Zhu, C. Reconciliation of group sparsity and low-rank models for image restoration. In Proceedings of the 2020 IEEE International Conference on Multimedia and Expo (ICME), London, UK, 6–10 July 2020.
  16. Wright, J.; Ganesh, A.; Rao, S.; Ma, Y. Robust principal component analysis: Exact recovery of corrupted low-rank matrices. In Proceedings of the 22nd International Conference on Neural Information Processing Systems (NIPS'09), 7–9 December 2009; Curran Associates Inc.: Red Hook, NY, USA, 2009; pp. 2080–2088.
  17. Candès, E.J.; Plan, Y. Matrix completion with noise. Proc. IEEE 2010, 98, 925–936.
  18. Candès, E.J.; Wakin, M.B.; Boyd, S.P. Enhancing sparsity by reweighted l1 minimization. J. Fourier Anal. Appl. 2008, 14, 877–905.
  19. Gu, S.; Zhang, L.; Zuo, W.; Feng, X. Weighted nuclear norm minimization with application to image denoising. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, 23–28 June 2014; pp. 2862–2869.
  20. Gu, S.; Xie, Q.; Meng, D.; Zuo, W.; Feng, X.; Zhang, L. Weighted nuclear norm minimization and its applications to low level vision. Int. J. Comput. Vis. 2017, 121, 183–208.
  21. Peng, Y.; Suo, J.; Dai, Q.; Xu, W. Reweighted low-rank matrix recovery and its application in image restoration. IEEE Trans. Cybern. 2014, 44, 2418–2430.
  22. Candès, E.J.; Li, X.; Ma, Y.; Wright, J. Robust principal component analysis? J. ACM 2011, 58, 1–37.
  23. Beck, A.; Teboulle, M. Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems. IEEE Trans. Image Process. 2009, 18, 2419–2434.
  24. Rudin, L.I.; Osher, S.; Fatemi, E. Nonlinear total variation based noise removal algorithms. Phys. D Nonlinear Phenom. 1992, 60, 259–268.
  25. Donoho, D.L. De-noising by soft-thresholding. IEEE Trans. Inf. Theory 1995, 41, 613–627.
  26. Dong, W.; Zhang, L.; Shi, G. Nonlocally centralized sparse representation for image restoration. IEEE Trans. Image Process. 2013, 22, 1620–1630.
  27. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
  28. Ni, Z.; Shi, Y.Q.; Ansari, N.; Wei, S. Reversible data hiding. IEEE Trans. Circuits Syst. Video Technol. 2006, 16, 354–362.
  29. Wang, H.; Cen, Y.; He, Z.; He, Z.; Zhao, R.; Zhang, F. Reweighted low-rank matrix analysis with structural smoothness for image denoising. IEEE Trans. Image Process. 2018, 27, 1777–1792.
  30. Lin, Z.; Chen, M.; Ma, Y. The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices. arXiv 2010, arXiv:1009.5055.
  31. Yue, D.; Dai, Q.; Liu, R.; Zhang, Z.; Hu, S. Low-rank structure learning via nonconvex heuristic recovery. IEEE Trans. Neural Netw. Learn. Syst. 2013, 24, 383–396.
  32. Xie, Y.; Qu, Y.; Tao, D.; Wu, W.; Yuan, Q.; Zhang, W. Hyperspectral image restoration via iteratively regularized weighted Schatten p-norm minimization. IEEE Trans. Geosci. Remote Sens. 2016, 54, 4642–4659.
  33. Chen, G.; Wang, J.; Zhang, F.; Wang, W. Image denoising in impulsive noise via weighted Schatten p-norm regularization. IEEE Trans. Image Process. 2019, 28, 1.
  34. Dong, H.; Yu, J.; Xiao, C. Dual reweighted Lp-norm minimization for salt-and-pepper noise removal. arXiv 2018, arXiv:1811.09173.
Figure 1. (a) The Lena image corrupted by Gaussian noise. (b) Restoration result of (a) based on LR-GSC. (c) The Lena image corrupted by outlier noise. (d) Restoration result of (c) based on LR-GSC.
Figure 2. Comparison of the denoising results with and without the TV norm; 30% of the pixels of the Barbara image are corrupted by large and sparse noise. (a) The corrupted image. (b) PSNR values for the Barbara image corrupted by large and sparse noise of different densities. (c) Restored by the dual-weighted low-rank group sparse coding model without the TV norm. PSNR = 30.34 dB. (d) Restored by the dual-weighted low-rank group sparse coding model with the TV norm. PSNR = 31.08 dB.
Figure 3. Flowchart of the proposed SDWLR-GSC model for image denoising. On the basis of LR-GSC [15], we applied the dual-weighted model to the reconstructed similar patches and introduced the TV norm to maintain the smoothness of the image structure. Finally, the optimal sparse codes are used to estimate the clean patch groups for constructing the restored image. (Firstly, non-local similar patches are extracted from the corrupted image through a block matching operator. Secondly, patches with similar structures are grouped for dictionary learning to obtain the group coefficients. At the same time, the group coefficients remain sparse and low-rank concurrently.)
Figure 4. Test images in our experiments. First row: Lena, Barbara, Goldhill, Frame, Road, Bridge, Couple, Monkey. Second row: Boat, Elaine, Flower, House, Lin, Monarch, Pentagon, Tank.
Figure 5. The influence of the values of b and m on the restoration results when the noise density p = 0.3.
Figure 6. The influence of the values of ρ, η, λ, and τ on the restoration results when the noise density p = 0.3.
Figure 7. Comparison of the restoration results on the single image Lena for PCP, reweighted $L_1$, NSVT, SRLRMR, and our method. Here, 20% of the image pixels are corrupted by large and sparse noise. (a) The original Lena image. (b) The large and sparse noise. (c) The corrupted image (12.24 dB). (d) Restoration result by PCP (26.75 dB). (e) Restoration result by reweighted $L_1$ norm minimization (23.84 dB). (f) Restoration result by NSVT (32.15 dB). (g) Restoration result by SRLRMR (32.47 dB). (h) Restoration result by our method (36.76 dB).
Figure 8. Comparison of the restoration results on the image House for PCP, WNNM-RPCA, WSNM-RPCA, WSNM-$L_1$, DWLP, and our method. Here, 30% of the image pixels are corrupted by salt and pepper noise. (a) The original House image, with size 512 × 512. (b) The input corrupted image (10.69 dB). (c) Restoration result by PCP (28.88 dB). (d) Restoration result by WNNM-RPCA (29.51 dB). (e) Restoration result by WSNM-RPCA (29.77 dB). (f) Restoration result by WSNM-$L_1$ norm minimization (28.15 dB). (g) Restoration result by DWLP (33.89 dB). (h) Restoration result by our method (38.08 dB).
Table 1. Comparison of different denoising methods under different large sparse noise probabilities in terms of PSNR (dB).

p = 0.2
Image      LR-GSC   PCP     Reweighted L1   NSVT    SRLRMR   SDWLR-GSC (Ours)
Lena       12.25    26.75   23.84           32.15   32.47    36.76
Barbara    11.74    24.23   22.33           26.28   26.36    34.56
Goldhill   11.56    28.41   26.75           31.99   33.71    34.48
Frame       9.63    25.90   22.79           28.10   29.64    30.95
Road       10.01    29.77   26.43           34.63   34.88    34.50
Bridge     10.17    29.98   27.96           34.01   33.73    34.69
Couple     12.19    25.71   23.49           28.78   29.31    31.18
Monkey     12.67    21.72   20.43           22.12   23.71    25.48
Boat       12.86    25.22   23.01           27.65   28.35    32.23
Average    11.45    26.41   24.11           29.52   30.24    32.76

p = 0.3
Image      LR-GSC   PCP     Reweighted L1   NSVT    SRLRMR   SDWLR-GSC (Ours)
Lena       10.48    21.27   22.17           23.84   30.67    33.31
Barbara     9.97    19.10   20.85           20.77   24.35    31.08
Goldhill    9.80    23.00   25.42           25.56   31.15    31.91
Frame       7.86    22.52   23.04           26.61   27.36    29.31
Road        8.24    25.20   26.69           28.73   33.31    33.18
Bridge      8.41    27.29   26.57           30.57   33.01    33.09
Couple     10.44    20.92   22.01           22.93   27.60    28.18
Monkey     10.91    18.09   19.21           18.12   21.90    22.93
Boat       11.12    21.25   21.62           22.01   27.66    28.74
Average     9.69    22.07   23.06           24.35   28.56    30.19

p = 0.4
Image      LR-GSC   PCP     Reweighted L1   NSVT    SRLRMR   SDWLR-GSC (Ours)
Lena        9.22    13.18   16.53           13.57   20.53    27.44
Barbara     8.71    12.25   15.10           12.50   17.88    24.43
Goldhill    8.54    13.04   19.39           13.80   21.66    27.48
Frame       7.61    12.32   19.61           15.69   22.58    26.26
Road        6.98    11.39   18.65           13.21   20.04    29.87
Bridge      7.14    13.94   24.49           18.04   28.12    32.15
Couple      9.18    13.28   17.24           13.50   20.55    23.60
Monkey      9.65    12.65   15.00           12.06   17.07    18.68
Boat        9.86    13.82   17.03           13.84   19.64    24.01
Average     8.54    12.88   18.12           14.02   20.90    25.99
Table 2. Comparison of different denoising methods under different salt and pepper noise probabilities in terms of PSNR (dB).

p = 0.2
Image      LR-GSC   PCP     WNNM-RPCA   WSNM-RPCA   WSNM-L1   DWLP    SDWLR-GSC (Ours)
Couple     12.60    26.34   26.03       26.29       25.08     31.57   31.73
Elaine     12.58    30.35   30.24       30.48       28.64     36.38   42.21
Flower     12.64    27.94   27.46       27.67       26.59     31.78   41.30
Goldhill   12.46    28.64   28.00       28.16       26.79     32.97   34.09
House      12.56    30.29   31.37       31.43       29.84     37.19   46.21
Lin        12.01    29.61   29.51       29.54       26.72     33.04   38.52
Monarch    12.15    24.51   26.38       26.55       25.16     29.53   41.30
Pentagon   12.72    25.84   25.51       25.66       24.29     31.20   31.83
Tank       12.91    31.67   30.12       30.23       29.49     35.92   34.57
Average    12.51    28.35   28.29       28.45       26.96     32.29   37.97

p = 0.3
Image      LR-GSC   PCP     WNNM-RPCA   WSNM-RPCA   WSNM-L1   DWLP    SDWLR-GSC (Ours)
Couple     10.88    24.50   24.41       24.64       23.91     28.77   29.68
Elaine     10.80    28.42   28.44       28.65       27.15     33.46   38.77
Flower     10.91    27.11   26.28       26.43       25.38     29.29   36.25
Goldhill   10.71    27.43   26.56       28.16       25.63     30.30   31.65
House      10.76    28.88   29.51       29.77       28.15     33.89   38.08
Lin        10.29    27.22   27.80       27.93       26.18     30.58   36.03
Monarch    10.40    22.84   24.27       24.47       23.61     26.39   36.82
Pentagon   10.95    24.15   23.97       24.18       23.16     28.93   29.45
Tank       11.14    30.05   29.57       29.64       28.57     33.21   32.73
Average    10.76    26.73   26.76       27.10       25.75     30.54   34.97

p = 0.4
Image      LR-GSC   PCP     WNNM-RPCA   WSNM-RPCA   WSNM-L1   DWLP    SDWLR-GSC (Ours)
Couple      9.60    23.43   23.77       23.81       22.62     26.57   28.35
Elaine      9.55    25.54   26.60       27.03       25.18     30.71   35.94
Flower      9.64    25.78   25.68       25.60       24.23     28.03   33.43
Goldhill    9.44    24.79   25.40       25.92       24.12     28.68   30.22
House       9.52    24.83   27.90       27.92       25.97     31.04   34.60
Lin         9.02    25.43   26.11       26.15       23.71     27.23   33.97
Monarch     9.14    21.34   22.98       23.22       22.22     24.32   34.42
Pentagon    9.73    22.34   23.04       22.96       21.95     26.13   27.97
Tank        9.90    27.57   29.12       29.19       27.29     31.94   31.70
Average     9.50    24.56   25.62       25.76       24.14     28.29   32.29
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
