Abstract

Matrix completion, which estimates missing values in visual data, is an important topic in computer vision. Most recent studies have focused on low rank matrix approximation via the nuclear norm. However, visual data such as images are rich in texture, which may not be well approximated by a low rank constraint. In this paper, we propose a novel matrix completion method that combines the nuclear norm with a local geometric regularizer to solve the problem of matrix completion for redundant texture images. We mainly consider one of the most commonly used graph regularizers: the total variation norm, a widely used measure for enforcing intensity continuity and recovering a piecewise smooth image. The experimental results show that encouraging results can be obtained by the proposed method on real texture images compared to state-of-the-art methods.

1. Introduction

The problem of matrix completion, which can be seen as an extension of the recently developed compressed sensing (CS) theory [1-3], plays an important role in the field of signal and image processing [4-11]. This problem arises in many real applications in computer vision and pattern recognition, such as image inpainting [12, 13], video denoising [14], and recommender systems [15, 16]. Reconstruction algorithms for matrix completion have received much attention. Cai et al. [17] proposed the singular value thresholding (SVT) algorithm for matrix completion and related nuclear norm minimization problems. In [18], a simple and fast singular value projection (SVP) algorithm for rank minimization with affine constraints was exploited. Keshavan et al. [19] dealt with matrix completion via singular value decomposition followed by local manifold optimization. In order to achieve a better approximation of the rank of a matrix, Hu et al. [11] presented an approach based on the truncated nuclear norm regularization (TNNR), which is defined as the difference between the nuclear norm and the sum of the largest few singular values. Since most of the existing matrix completion models aim to solve the low rank optimization via the nuclear norm, we recall this model here. For an incomplete matrix $M$ of rank $r$, the model can be described as follows:
$$\min_{X}\ \operatorname{rank}(X) \quad \text{s.t.} \quad X_{ij} = M_{ij},\ (i,j) \in \Omega, \tag{1}$$
where $X, M \in \mathbb{R}^{m \times n}$, and $\Omega$ is the set of locations corresponding to the observed entries.

Unfortunately, the rank minimization problem in (1) is NP-hard, so the following convex relaxation is widely used:
$$\min_{X}\ \|X\|_{*} \quad \text{s.t.} \quad X_{ij} = M_{ij},\ (i,j) \in \Omega, \tag{2}$$
where $\|X\|_{*}$ is the nuclear norm given by
$$\|X\|_{*} = \sum_{i=1}^{\min(m,n)} \sigma_{i}(X), \tag{3}$$
where $\sigma_{i}(X)$ denotes the $i$th largest singular value of $X$.
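As a quick numerical illustration (added here, not part of the original paper), the nuclear norm in (3) can be evaluated by summing the singular values; a minimal NumPy sketch:

```python
import numpy as np

def nuclear_norm(X):
    """Sum of the singular values of X, i.e., the nuclear norm in (3)."""
    return np.linalg.svd(X, compute_uv=False).sum()

# sanity check on a random rank-3 matrix
A = np.random.randn(50, 3) @ np.random.randn(3, 40)
print(nuclear_norm(A))   # sum of the three nonzero singular values of A
```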

In this paper, our objective is to exploit the intrinsic geometry of the data distribution and to incorporate it as an additional regularization term in order to deal with images that are rich in texture. The total variation (TV) norm has demonstrated its usefulness as a graph regularizer in the field of image processing, so we propose a method that combines the nuclear norm with a linear TV approximate norm to solve the matrix completion problem. We call it the linear total variation approximate regularized nuclear norm (LTVNN) minimization problem. This combined optimization problem is solved by a simple and efficient optimization scheme based on the alternating direction method of multipliers (ADMM) [20, 21].

The paper is organized as follows. In the next section, we introduce the proposed LTVNN model and we describe the optimization schemes. In Section 3, we establish the convergence results for the iterations given in Section 2. Experimental results on a set of images are provided in Section 4. Finally, we draw some conclusions in Section 5.

2. Proposed Method

2.1. Some Preliminaries

The total variation along the vertical and horizontal directions of an image $X = (x_{i,j}) \in \mathbb{R}^{m \times n}$ can be described as
$$(\nabla_{v} X)_{i,j} = x_{i+1,j} - x_{i,j}, \tag{4}$$
$$(\nabla_{h} X)_{i,j} = x_{i,j+1} - x_{i,j}. \tag{5}$$
So the total variation of $X$ is the summation of the magnitude of the gradient at each pixel [22]:
$$\mathrm{TV}(X) = \sum_{i,j} \sqrt{(\nabla_{v} X)_{i,j}^{2} + (\nabla_{h} X)_{i,j}^{2}}. \tag{6}$$
An alternative, anisotropic, form of the total variation is as follows:
$$\mathrm{TV}(X) = \sum_{i,j} \bigl( \lvert(\nabla_{v} X)_{i,j}\rvert + \lvert(\nabla_{h} X)_{i,j}\rvert \bigr). \tag{7}$$
Here, we use a linear total variation approximation of (7), which replaces the absolute values by squares, to approximate this second kind of total variation; that is,
$$\|X\|_{\mathrm{LTV}} = \sum_{i,j} \bigl( (\nabla_{v} X)_{i,j}^{2} + (\nabla_{h} X)_{i,j}^{2} \bigr). \tag{8}$$
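The three quantities in (6)-(8) are easy to compute with forward differences; the sketch below is a minimal NumPy illustration added here for clarity (boundary handling follows the convention that the last row/column has no forward difference):

```python
import numpy as np

def tv_terms(X):
    """Forward differences along the vertical and horizontal directions, as in (4)-(5)."""
    dv = np.diff(X, axis=0)          # x[i+1, j] - x[i, j]
    dh = np.diff(X, axis=1)          # x[i, j+1] - x[i, j]
    return dv, dh

def tv_isotropic(X):
    dv, dh = tv_terms(X)
    # zero-pad so the two difference fields have matching shapes before combining
    dv = np.pad(dv, ((0, 1), (0, 0)))
    dh = np.pad(dh, ((0, 0), (0, 1)))
    return np.sqrt(dv**2 + dh**2).sum()              # equation (6)

def tv_anisotropic(X):
    dv, dh = tv_terms(X)
    return np.abs(dv).sum() + np.abs(dh).sum()       # equation (7)

def tv_linear(X):
    dv, dh = tv_terms(X)
    return (dv**2).sum() + (dh**2).sum()             # equation (8)
```

Replacing the absolute values in (7) by squares, as in `tv_linear`, is what makes the regularizer quadratic and therefore easy to handle in the optimization scheme of Section 2.3.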

2.2. Proposed Model

As mentioned above, the key point of the proposed approach is the combination of the nuclear norm and the linear total variation approximate norm; therefore, the optimization problem is described as
$$\min_{X}\ (1-\lambda)\|X\|_{*} + \lambda\|X\|_{\mathrm{LTV}} \quad \text{s.t.} \quad X_{ij} = M_{ij},\ (i,j) \in \Omega, \tag{9}$$
where $\lambda \in [0,1]$ is a penalty parameter balancing the two norms, $\|X\|_{*}$ is the nuclear norm defined in (3), and $\|X\|_{\mathrm{LTV}}$ is the linear total variation approximate norm defined in (8), which can be reformulated as
$$\|X\|_{\mathrm{LTV}} = \operatorname{tr}\!\bigl(X^{T} A_{r}^{T} A_{r} X\bigr) + \operatorname{tr}\!\bigl(X A_{c} A_{c}^{T} X^{T}\bigr) = \|A_{r} X\|_{F}^{2} + \|X A_{c}\|_{F}^{2}, \tag{10}$$
where $\operatorname{tr}(\cdot)$ means the trace of a matrix, $\|\cdot\|_{F}$ denotes the Frobenius norm of a matrix, and $A_{c} \in \mathbb{R}^{n \times n}$ and $A_{r} \in \mathbb{R}^{m \times m}$ are, respectively, the column and row transform (difference) matrices given by
$$A_{r} = \begin{pmatrix} -1 & 1 & & \\ & \ddots & \ddots & \\ & & -1 & 1 \\ & & & 0 \end{pmatrix}, \qquad A_{c} = \begin{pmatrix} -1 & & & \\ 1 & \ddots & & \\ & \ddots & -1 & \\ & & 1 & 0 \end{pmatrix}. \tag{11}$$

So, the problem in (9) can be rewritten as
$$\min_{X}\ (1-\lambda)\|X\|_{*} + \lambda\Bigl(\operatorname{tr}\!\bigl(X^{T} A_{r}^{T} A_{r} X\bigr) + \operatorname{tr}\!\bigl(X A_{c} A_{c}^{T} X^{T}\bigr)\Bigr) \quad \text{s.t.} \quad X_{ij} = M_{ij},\ (i,j) \in \Omega. \tag{12}$$
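To make the reformulation in (10)-(11) concrete, the following sketch builds bidiagonal difference matrices (an assumed construction, with the boundary row/column zeroed as noted in the Appendix) and checks numerically that the trace form agrees with the entrywise sum in (8):

```python
import numpy as np

def row_transform(m):
    """A_r: forward difference down the rows; last row is zero."""
    A = -np.eye(m) + np.eye(m, k=1)
    A[-1, :] = 0.0
    return A

def col_transform(n):
    """A_c: forward difference across the columns; last column is zero."""
    A = -np.eye(n) + np.eye(n, k=-1)
    A[:, -1] = 0.0
    return A

m, n = 6, 5
X = np.random.randn(m, n)
Ar, Ac = row_transform(m), col_transform(n)

trace_form = np.trace(X.T @ Ar.T @ Ar @ X) + np.trace(X @ Ac @ Ac.T @ X.T)
entrywise  = (np.diff(X, axis=0)**2).sum() + (np.diff(X, axis=1)**2).sum()
print(np.isclose(trace_form, entrywise))   # True
```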

2.3. The Optimization Scheme

The alternating direction method of multipliers (ADMM) [20, 21] is an efficient and scalable optimization scheme which exploits the structure of the optimization problem. In this section, we use ADMM to deal with the problem in (12). Introducing an auxiliary variable $W$, the problem can be reformulated as
$$\min_{X, W}\ (1-\lambda)\|X\|_{*} + \lambda\Bigl(\operatorname{tr}\!\bigl(W^{T} A_{r}^{T} A_{r} W\bigr) + \operatorname{tr}\!\bigl(W A_{c} A_{c}^{T} W^{T}\bigr)\Bigr) + \mathbb{I}_{\Omega}(X) + \mathbb{I}_{\Omega}(W) \quad \text{s.t.} \quad X = W, \tag{13}$$
where $\mathbb{I}_{\Omega}(X)$ and $\mathbb{I}_{\Omega}(W)$ are the indicator functions of the constraint set $\{Z : Z_{ij} = M_{ij},\ (i,j) \in \Omega\}$. The augmented Lagrangian function of (13) is
$$L(X, W, Y) = (1-\lambda)\|X\|_{*} + \lambda\Bigl(\operatorname{tr}\!\bigl(W^{T} A_{r}^{T} A_{r} W\bigr) + \operatorname{tr}\!\bigl(W A_{c} A_{c}^{T} W^{T}\bigr)\Bigr) + \langle Y, X - W\rangle + \frac{\beta}{2}\|X - W\|_{F}^{2}, \tag{14}$$
where $\beta > 0$ is the penalty parameter and $Y$ is the multiplier. The solution is obtained by solving each regularized subproblem separately and then combining the results; the two subproblems are defined as follows. The row TV subproblem is
$$\min_{X, W}\ (1-\lambda)\|X\|_{*} + \lambda\operatorname{tr}\!\bigl(W^{T} A_{r}^{T} A_{r} W\bigr) \quad \text{s.t.} \quad X = W,\ X_{ij} = M_{ij},\ (i,j) \in \Omega, \tag{15}$$
whose solution $X^{\mathrm{row}}$ denotes the optimization result along the vertical direction of the total variation defined in (4). The column TV subproblem is
$$\min_{X, W}\ (1-\lambda)\|X\|_{*} + \lambda\operatorname{tr}\!\bigl(W A_{c} A_{c}^{T} W^{T}\bigr) \quad \text{s.t.} \quad X = W,\ X_{ij} = M_{ij},\ (i,j) \in \Omega, \tag{16}$$
whose solution $X^{\mathrm{col}}$ denotes the optimization result along the horizontal direction of the total variation defined in (5).

We deal with the column linear TV optimization problem in (16) by the following steps in each iteration.

Step 1 (initial setting). Set $k = 0$, $X_{0} = W_{0} = M_{\Omega}$ (the observed matrix with zeros at the missing entries), and $Y_{0} = 0$, with the tolerance $\varepsilon > 0$.

Step 2 (computing $X_{k+1}$). Fix $W_{k}$ and $Y_{k}$, and minimize the augmented Lagrangian of (16) with respect to $X$ to obtain $X_{k+1}$ as
$$X_{k+1} = \arg\min_{X}\ (1-\lambda)\|X\|_{*} + \langle Y_{k}, X - W_{k}\rangle + \frac{\beta}{2}\|X - W_{k}\|_{F}^{2}. \tag{17}$$
Ignoring the constant terms, (17) can be rewritten as
$$X_{k+1} = \arg\min_{X}\ (1-\lambda)\|X\|_{*} + \frac{\beta}{2}\Bigl\|X - \Bigl(W_{k} - \frac{1}{\beta}Y_{k}\Bigr)\Bigr\|_{F}^{2}. \tag{18}$$
To solve (18), Cai et al. [17] introduced the soft-thresholding operator, which is defined as follows:
$$\mathcal{D}_{\tau}(Z) = U \operatorname{diag}\bigl(\max(\sigma_{i} - \tau, 0)\bigr) V^{T}, \tag{19}$$
where $Z = U \operatorname{diag}(\sigma_{i}) V^{T}$ is the singular value decomposition of $Z$ and $\tau > 0$.
Using the operator in (19), the solution of (18) can be obtained as
$$X_{k+1} = \mathcal{D}_{(1-\lambda)/\beta}\Bigl(W_{k} - \frac{1}{\beta}Y_{k}\Bigr). \tag{20}$$
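A minimal NumPy sketch of the singular value soft-thresholding operator in (19)-(20); this is a standard implementation added for reference, not code from the paper:

```python
import numpy as np

def svt(Z, tau):
    """Singular value soft-thresholding D_tau(Z): shrink the singular
    values of Z by tau and discard those that become negative."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

# X update of Step 2: X_{k+1} = svt(W_k - Y_k / beta, (1 - lam) / beta),
# where W_k, Y_k, lam, beta come from the surrounding ADMM iteration.
```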

Step 3 (computing $W_{k+1}$). Fix $X_{k+1}$ and $Y_{k}$ and calculate $W_{k+1}$ as follows:
$$W_{k+1} = \arg\min_{W}\ \lambda\operatorname{tr}\!\bigl(W A_{c} A_{c}^{T} W^{T}\bigr) + \langle Y_{k}, X_{k+1} - W\rangle + \frac{\beta}{2}\|X_{k+1} - W\|_{F}^{2}, \tag{21}$$
which is a quadratic function of $W$ and can be easily solved by setting its derivative with respect to $W$ to zero, which gives
$$W_{k+1} = \bigl(\beta X_{k+1} + Y_{k}\bigr)\bigl(2\lambda A_{c} A_{c}^{T} + \beta I\bigr)^{-1}. \tag{22}$$
Then we fix the values at the observed entries:
$$\bigl(W_{k+1}\bigr)_{ij} = \begin{cases} M_{ij}, & (i,j) \in \Omega, \\ \bigl(W_{k+1}\bigr)_{ij}, & (i,j) \in \bar{\Omega}, \end{cases} \tag{23}$$
where $\bar{\Omega}$ denotes the set of the missing entries.
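The update in (22)-(23) amounts to a single linear solve followed by re-imposing the observed entries; the sketch below assumes the column transform matrix `Ac` and the observation mask `obs_mask` (True on $\Omega$) are available, for example from the earlier sketches:

```python
import numpy as np

def w_update_col(X_next, Y, M, obs_mask, Ac, lam, beta):
    """Step 3 for the column TV subproblem: solve (22), then apply (23)."""
    n = Ac.shape[0]
    S = 2.0 * lam * (Ac @ Ac.T) + beta * np.eye(n)     # right-hand system matrix
    W = np.linalg.solve(S.T, (beta * X_next + Y).T).T  # W = (beta*X + Y) S^{-1}
    W[obs_mask] = M[obs_mask]                          # keep observed entries fixed
    return W
```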

Step 4 (computing $Y_{k+1}$). Fix $X_{k+1}$ and $W_{k+1}$ and update the multiplier as follows:
$$Y_{k+1} = Y_{k} + \beta\bigl(X_{k+1} - W_{k+1}\bigr). \tag{24}$$
These steps are repeated until the stop condition $\|X_{k+1} - X_{k}\|_{F} \leq \varepsilon$ is satisfied; we denote the resulting solution by $X^{\mathrm{col}}$.

The row TV problem defined by (15) can be solved in a similar way to the column TV problem. The only difference is the update of $W_{k+1}$ in Step 3, which is given by
$$W_{k+1} = \bigl(2\lambda A_{r}^{T} A_{r} + \beta I\bigr)^{-1}\bigl(\beta X_{k+1} + Y_{k}\bigr), \tag{25}$$
and the stop condition is again $\|X_{k+1} - X_{k}\|_{F} \leq \varepsilon$; we denote the resulting solution by $X^{\mathrm{row}}$.

Finally, we obtain the recovered matrix $X$ as the average of $X^{\mathrm{row}}$ and $X^{\mathrm{col}}$; that is,
$$X = \frac{1}{2}\bigl(X^{\mathrm{row}} + X^{\mathrm{col}}\bigr). \tag{26}$$
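Putting the steps together, the sketch below outlines one possible implementation of the whole scheme: the ADMM loop is run once for the row TV subproblem (15) and once for the column TV subproblem (16), and the two results are averaged as in (26). Variable names, the fixed penalty $\beta$, and the default $\lambda = 0.5$ are assumptions for illustration rather than the paper's exact settings.

```python
import numpy as np

def svt(Z, tau):
    """Singular value soft-thresholding operator (19)."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def difference_matrix(size, side):
    """Bidiagonal forward-difference matrix: 'row' gives A_r (acts on the left),
    'col' gives A_c (acts on the right); the boundary row/column is zero."""
    A = -np.eye(size) + np.eye(size, k=1 if side == 'row' else -1)
    if side == 'row':
        A[-1, :] = 0.0
    else:
        A[:, -1] = 0.0
    return A

def ltvnn_single(M, obs_mask, side, lam=0.5, beta=1.0, tol=1e-4, max_iter=300):
    """ADMM for one TV direction: side='row' solves (15), side='col' solves (16)."""
    m, n = M.shape
    X = np.where(obs_mask, M, 0.0)                 # Step 1: initialization
    W, Y = X.copy(), np.zeros_like(X)
    if side == 'col':
        Ac = difference_matrix(n, 'col')
        S = 2.0 * lam * (Ac @ Ac.T) + beta * np.eye(n)
    else:
        Ar = difference_matrix(m, 'row')
        S = 2.0 * lam * (Ar.T @ Ar) + beta * np.eye(m)
    for _ in range(max_iter):
        X_old = X
        X = svt(W - Y / beta, (1.0 - lam) / beta)  # Step 2: nuclear norm update (20)
        if side == 'col':
            W = np.linalg.solve(S.T, (beta * X + Y).T).T   # Step 3: column form (22)
        else:
            W = np.linalg.solve(S, beta * X + Y)           # Step 3: row form (25)
        W[obs_mask] = M[obs_mask]                  # re-impose observed entries (23)
        Y = Y + beta * (X - W)                     # Step 4: multiplier update (24)
        if np.linalg.norm(X - X_old) <= tol:       # stop condition
            break
    return X

def ltvnn(M, obs_mask, **kwargs):
    """Final LTVNN estimate (26): average of the row and column TV solutions."""
    return 0.5 * (ltvnn_single(M, obs_mask, 'row', **kwargs)
                  + ltvnn_single(M, obs_mask, 'col', **kwargs))
```

Calling `ltvnn(channel, mask)` for each color channel and stacking the results corresponds to the per-channel processing described in Section 4.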

3. Convergence Analysis

In this section, we give the proof of convergence for the column total variation iteration (16); the proof of convergence for the row total variation iteration is similar. The objective function of the column variation problem (16) is as follows:
$$f(X, W) = (1-\lambda)\|X\|_{*} + \lambda\operatorname{tr}\!\bigl(W A_{c} A_{c}^{T} W^{T}\bigr) \quad \text{s.t.} \quad X = W,\ X_{ij} = M_{ij},\ (i,j) \in \Omega. \tag{27}$$

Lemma 1. Let $\mathcal{D}_{\tau}$ be the soft-thresholding operator defined in (19) and let $Z_{1}, Z_{2} \in \mathbb{R}^{m \times n}$. Then
$$\|\mathcal{D}_{\tau}(Z_{1}) - \mathcal{D}_{\tau}(Z_{2})\|_{F} \leq \|Z_{1} - Z_{2}\|_{F}. \tag{28}$$
The details of the proof can be found in [17].
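This nonexpansiveness property is straightforward to check numerically; the following small experiment (an illustration added here, not part of the original proof) draws two random matrices and verifies the inequality:

```python
import numpy as np

def svt(Z, tau):
    """Singular value soft-thresholding operator (19)."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(0)
Z1 = rng.standard_normal((30, 20))
Z2 = rng.standard_normal((30, 20))
lhs = np.linalg.norm(svt(Z1, 0.5) - svt(Z2, 0.5))
rhs = np.linalg.norm(Z1 - Z2)
print(lhs <= rhs)   # True: soft-thresholding is nonexpansive
```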

Theorem 2. Assume that the sequence of step sizes used in the iteration satisfies the condition stated in the Appendix. Let $X^{*}$ denote the optimal solution of (16) and let $X_{k}$ denote the $k$th iterate; then the iteration procedure defined in Section 2.3 converges to the unique optimal solution, that is, $\lim_{k \to \infty} X_{k} = X^{*}$. The details of the proof can be found in the Appendix.

4. Experiments

In this section, we test the proposed method on a set of images. The algorithm was implemented in MATLAB on a PC running the Microsoft Windows 7 operating system with an Intel Core i5 CPU at 2.79 GHz and 2 GB of RAM.

We deal with the three channels (R, G, B) of color images separately and combine the results together to get the final outcome. We use peak signal-to-noise ratio (PSNR) values to evaluate the performance:
$$\mathrm{PSNR} = 10\log_{10}\!\left(\frac{255^{2}}{\mathrm{MSE}}\right), \tag{29}$$
where MSE denotes the mean squared error between the original image $M$ and the recovered image $X$,
$$\mathrm{MSE} = \frac{1}{mn}\sum_{i=1}^{m}\sum_{j=1}^{n}\bigl(X_{ij} - M_{ij}\bigr)^{2}. \tag{30}$$
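For completeness, a minimal PSNR routine corresponding to (29)-(30); here the mean squared error is taken over all pixels of a channel, which is an assumption on our part:

```python
import numpy as np

def psnr(original, recovered, peak=255.0):
    """PSNR in dB between an original channel and a recovered channel, (29)-(30)."""
    mse = np.mean((np.asarray(original, float) - np.asarray(recovered, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```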

In the experiments, we consider two situations: random mask sampling and word mask sampling. Figure 1 shows the results recovered by LTVNN with a 60% random mask and with a word mask for values of $\lambda$ ranging from 0 to 1. Figure 2 shows the PSNR of the recovered Pepper image under different random sample ratios and under a word mask, for $\lambda$ from 0 to 1 with a step of 0.1, by LTVNN. It can be observed from these two figures that the best result is obtained for values of $\lambda$ near 0.5, which corresponds to the case where the two norms (nuclear and LTV) are weighted equally in (9). For the two extreme cases, $\lambda = 0$ (only the nuclear norm is taken into consideration) and $\lambda = 1$ (only the linear total variation approximate norm is considered), the algorithm loses its efficiency.

We also compare our method (LTVNN) with other matrix completion methods, including TNNR [10, 11], SVT [12], SVP [13], and OptSpace [14]. Figure 3 plots the PSNR of the recovered Pepper image for different random sample ratios (from 40% to 90%) obtained by LTVNN and by the four other methods (TNNR, SVT, SVP, and OptSpace). It can be seen from Figure 3 that the proposed LTVNN method achieves much higher PSNR than the other methods. Figure 4 shows the comparison of the PSNR of the images recovered from Lena under a word mask by LTVNN and the other methods. Table 1 lists the PSNR results under a word mask sample for different images by LTVNN and the other methods. From Figure 4 and Table 1, we can see that the proposed method outperforms the other matrix completion methods under a word mask for different images.

5. Conclusion

In this paper, we have proposed a new model that combines the nuclear norm and the total variation norm for the matrix completion problem, which is then solved with an ADMM-based scheme. Experimental results demonstrate the effectiveness of the proposed algorithm compared to other methods.

Appendix

Before we give the proof of Theorem 2, we first verify a property of the transform matrix used in the iteration; without loss of generality, this can be checked on a small example matrix and the corresponding transform matrix. The proof of Theorem 2 is as follows.

Proof. Let $(X^{*}, W^{*}, Y^{*})$ be a primal-dual optimal point of problem (27). The optimality conditions give (A.2) and (A.3). From (A.3) we deduce (A.4), and combining (A.4) with Lemma 1 gives (A.5). Observing the projection step in (23) and setting the corresponding residual terms, we obtain (A.7).
Based on (A.7), when the condition on the step size holds, the corresponding error term is nonincreasing and converges to a limit. This condition is easy to satisfy when the penalty parameter is chosen as a small constant. We can then obtain further properties as follows.
Since the residual converges to zero, the associated difference term is infinitesimal and also converges to zero. Reconsidering (A.2), the first column of the difference evidently converges to zero; the second column converges to the first column and hence to zero; the third column converges to the second column and hence to zero; and so on, so that through the iteration the whole difference converges to zero, except for the last column and the last row, which are set to zero by the definitions in (4) and (5).
Fortunately, this boundary effect has no side effect on the global result. Theorem 2 is thus established.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported by the National Basic Research Program of China under Grant 2011CB707904, by the National Natural Science Foundation of China under Grants 61201344, 61271312, and 61073138, by the Ministry of Education of China under Grants 20110092110023 and 20120092120036, by the Key Laboratory of Computer Network and Information Integration (Southeast University), Ministry of Education, and by the Natural Science Foundation of Jiangsu Province under Grant BK2012329. This work was also supported by an INSERM postdoctoral fellowship.