
No-Reference Image Quality Assessment and Blind Deblurring with Sharpness Metrics Exploiting Fourier Phase Information


Abstract

It has been known for more than 30 years that most of the geometric content of a digital image is encoded in the phase of its Fourier transform. This has led to several works that exploit the global (Fourier) or local (Wavelet) phase information of an image to achieve quality assessment, edge detection, and, more recently, blind deblurring. We here propose a deeper insight into three recent sharpness metrics (global phase coherence, sharpness index, and a simplified version of it), which all measure in a probabilistic sense the surprisingly small total variation of an image compared to that of certain associated random phase fields. We exhibit several theoretical connections between these indices and study their behavior on a general class of stationary random fields. We also use experiments to highlight the behavior of these metrics with respect to blur, noise, and deconvolution artifacts (ringing). Finally, we propose an application to isotropic blind deblurring and illustrate its efficiency on several examples.


Notes

  1. The principle of a contrario methods is to detect structures as the cause of measurements that could hardly be observed in random data.

  2. The gradient covariance matrix of an image \(u\) is the value at \(\mathbf {z}=0\) of the gradient autocorrelation matrix \(\varGamma \) defined in Theorem 1.

  3. See Appendix 4 for the numerical computation of \(d(r,U)\).

  4. The computation of this oracle kernel is detailed in Appendix 5.

  5. This image is available on the web site http://www.mi.parisdescartes.fr/~moisan/sharpness/.

  6. We recall that the maximal degree of a graph is the maximal number of edges incident to a single vertex.

References

  1. Ayer, M., Brunk, H. D., Ewing, G. M., Reid, W. T., Silverman, E.: An empirical distribution function for sampling with incomplete information. Ann. Math. Stat., 641–647 (1955)

  2. Blanchet, G., Moisan, L., Rougé, B.: A linear prefilter for image sampling with ringing artifact control. Proc. Int. Conf. Image Process. (ICIP) 3, 577–580 (2005)


  3. Blanchet, G., Moisan, L., Rougé, B.: Measuring the global phase coherence of an image. Proc. Int. Conf. Image Process. (ICIP), 1176–1179 (2008)

  4. Blanchet, G., Moisan, L.: An explicit sharpness index related to global phase coherence. Proc. Int. Conf. Acoust. Speech Sig. Process. (ICASSP), 1065–1068 (2012)

  5. Calderero, F., Moreno, P.: Evaluation of sharpness measures and proposal of a stop criterion for reverse diffusion in the context of image deblurring. In: Proceedings of the International Conference on Computer Vision Theory and Applications (VISAPP) (2013)

  6. Chambolle, A.: An algorithm for total variation minimization and applications. J. Math. Imaging Vis. 20(1–2), 89–97 (2004)


  7. Chandler, D.M.: Seven challenges in image quality assessment: past, present, and future research. ISRN Signal Processing, article id. 905685, (2013)

  8. Deriugin, N.G.: The power spectrum and the correlation function of the television signal. Telecommunications 1(7), 1–12 (1956)


  9. Desolneux, A., Moisan, L., Morel, J.-M.: Dequantizing image orientation. IEEE Trans. Image Process. 11(10), 1129–1140 (2002)


  10. Desolneux, A., Moisan, L., Morel, J.-M.: From gestalt theory to image analysis: a probabilistic approach. Springer, collection. Interdisciplinary Applied Mathematics, vol. 34 (2008)

  11. Desolneux, A., Moisan, L., Ronsin, S.: A compact representation of random phase and Gaussian textures. In: Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 1381–1384 (2012)

  12. Ekeland, I., Temam, R.: Convex Analysis and Variational Problems. SIAM, Classics in Applied Mathematics (1999)

  13. Ferzli, R., Karam, L.J.: A no-reference objective image sharpness metric based on the notion of just noticeable blur (JNB). IEEE Trans. Image Process. 18(4), 717–728 (2009)


  14. Grosjean, B., Moisan, L.: A-contrario detectability of spots in textured backgrounds. J. Math. Imaging Vision 33(3), 313–337 (2009)


  15. Frisen, M.: Unimodal regression. Stat. 35(4), 479–485 (1986)


  16. Galerne, B., Gousseau, Y., Morel, J.-M.: Random phase textures: theory and synthesis. IEEE Trans. Image Process. 20(1), 257–267 (2011)


  17. Hassen, R., Wang, Z., Salama, M.: No-reference image sharpness assessment based on local phase coherence measurement. In: Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 2434–2437 (2010)

  18. Janson, S.: Normal convergence by higher semiinvariants with applications to sums of dependent random variables and random graphs. Ann. Probab. 16(1), 305–312 (1988)


  19. Kovesi, P.: Phase congruency: a low-level image invariant. Psychol. Res. 64, 136–148 (2000)


  20. Kovesi, P.: Image features from phase congruency. Technical Report (1999)

  21. Louchet, C., Moisan, L.: Total variation denoising using iterated conditional expectation. In: Proceedings of European Signal Processing Conference (EUSIPCO) (2014)

  22. Leclaire, A., Moisan, L.: Blind deblurring using a simplified sharpness index. In: Proceedings of International Conference on Scale Space and Variational Methods in Computer Vision (SSVM), pp. 86–97 (2013)

  23. Leclaire, A., Moisan, L.: Une Variante Non Périodique du Sharpness Index. Actes du GRETSI (in French), pp. 86–97 (2013)

  24. Levin, A., Weiss, Y., Durand, F., Freeman, W.T.: Efficient Marginal Likelihood Optimization in Blind Deconvolution. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2657–2664 (2011)

  25. Liu, Y., Wang, J., Cho, S., Finkelstein, A., Rusinkiewicz, S.: A no-reference metric for evaluating the quality of motion deblurring. ACM Trans. Gr. (TOG) 32(6), 175 (2013)


  26. Marziliano, P., Dufaux, F., Winkler, S., Ebrahimi, T.: Perceptual blur and ringing metrics: application to JPEG2000. Sig. Process. 19(2), 163–172 (2004)


  27. Moisan, L.: How to discretize the total variation of an image? Proc. Appl. Math. Mech. 7, 1041907–1041908 (2007)


  28. Moisan, L.: Periodic plus smooth image decomposition. J. Math. Imaging Vision 39(2), 120–145 (2011)


  29. Morrone, M.C., Burr, D.C.: Feature detection in human vision: a phase-dependent energy model. In: Proceedings of the Royal Society of London, Series B, pp. 221–245 (1988)

  30. Oppenheim, A.V., Lim, J.S.: The importance of phase in signals. Proc. IEEE 69, 529–541 (1981)


  31. Ruderman, D.L.: The statistics of natural images. Network 5(4), 517–548 (1994)


  32. Rudin, L.I., Osher, S., Fatemi, E.: Nonlinear total variation based noise removal algorithms. Phys. D 60, 259–268 (1992)


  33. Shevtsova, I.: An improvement of convergence rate estimates in the Lyapunov theorem. Dokl. Math. 82, 862–864 (2010)


  34. Stout, Q.F.: Unimodal regression via prefix isotonic regression. Comput. Stat. Data Anal. 53, 289–297 (2008)


  35. Takeda, H., Farsiu, S., Milanfar, P.: Kernel regression for image processing and reconstruction. IEEE Trans. Image Process. 16(2), 349–366 (2007)


  36. van Wijk, J.J.: Spot noise texture synthesis for data visualization. ACM SIGGRAPH Comput. Gr. 25(4), 309–318 (1991)


  37. Vu, C.T., Chandler, D.M.: S3: a spectral and spatial sharpness measure. In: Advances in Multimedia, Proceedings of the International Conference MMEDIA, pp. 37–43 (2009)

  38. Wang, Z., Bovik, A.: Modern Image Quality Assessment (Synthesis Lectures on Image, Video, and Multimedia Processing). Morgan & Claypool Publishers, San Rafael, CA (2006)

  39. Wang, Z., Simoncelli, E.P.: Local phase coherence and the perception of blur. Adv. Neural Inf. Process. Syst. (NIPS) 16, 786–792 (2004)


  40. Zhu, X., Milanfar, P.: Automatic parameter selection for denoising algorithms using a no-reference measure of image content. IEEE Trans. Image Process. 19(12), 3116–3132 (2010)



Author information


Corresponding author

Correspondence to Lionel Moisan.

Appendices

1.1 Appendix 1: Estimation of the Mean TV of a RPN

We saw in Theorem 1 (Eq. (10)) that

$$\begin{aligned} {\mathbb {E}}({\mathrm {TV}}(u*W)) = (\alpha _x + \alpha _y) \sqrt{\frac{2}{\pi }} \sqrt{MN}. \end{aligned}$$
(32)

The right-hand side of (32) appears to be a good approximation of \({\mathbb {E}}({\mathrm {TV}}(u_\psi ))\), that is, of the mean TV in the RPN model. As noticed in [4], for most images the relative error is around \(1\,\%\) or below. In this appendix, we exhibit an upper bound on the absolute difference.

With the definition of TV, one can write

$$\begin{aligned} {\mathbb {E}}( {\mathrm {TV}}(u_\psi ) ) = \sum _{{\mathbf {x}}\in \varOmega } {\mathbb {E}}| \partial _x \dot{u}_\psi ({\mathbf {x}}) | + {\mathbb {E}}| \partial _y \dot{u}_\psi ({\mathbf {x}}) |, \end{aligned}$$

so that it is sufficient to show that \({\mathbb {E}}| \partial _x \dot{u}_\psi ({\mathbf {x}}) | \approx \alpha _x \sqrt{ \frac{2}{\pi MN}}\) for each \({\mathbf {x}}\in \varOmega \). This will follow from a Gaussian approximation of \(\partial _x \dot{u}_\psi ({\mathbf {x}})\) which implies

$$\begin{aligned} {\mathbb {E}}( | \partial _x \dot{u}_\psi ({\mathbf {x}}) | ) \approx \sqrt{\frac{2}{\pi }} \sqrt{ {\mathbb {E}}( ( \partial _x \dot{u}_\psi ({\mathbf {x}}) )^2 ) } \end{aligned}$$
(33)

(notice that the equality holds for a zero-mean Gaussian r.v., as shown by Lemma 4 of Appendix 3).

With the Fourier reconstruction formula, one can write that for all \({\mathbf {x}}\in \varOmega \),

$$\begin{aligned} \partial _x \dot{u}_\psi ({\mathbf {x}}) = \frac{1}{MN} \sum _{\varvec{\xi }\in \varOmega } | \hat{u}(\varvec{\xi }) | \mathrm{e}^{i \psi (\varvec{\xi })} \mathrm{e}^{i \langle {\mathbf {x}}, \varvec{\xi }\rangle } ( \mathrm{e}^{\frac{2i\pi x_1}{M}} - 1 ). \end{aligned}$$
(34)

For any \({\mathbf {x}}\in \varOmega \), the set \((\mathrm{e}^{i \psi (\varvec{\xi })} \mathrm{e}^{i \langle {\mathbf {x}}, \varvec{\xi }\rangle } )_{\varvec{\xi }\in \varOmega }\) is a random phase field. It follows that the r.v. \(|\partial _x \dot{u}_\psi ({\mathbf {x}})|\) are identically distributed, but they are not independent a priori. This is why we cannot use the central limit theorem directly on the sum \(\sum _{{\mathbf {x}}\in \varOmega } |\partial _x \dot{u}_\psi ({\mathbf {x}})| \; .\) Instead we will use a Gaussian approximation of each \(\partial _x \dot{u}_\psi ({\mathbf {x}})\) in order to derive a bound for the Gaussian approximation of \(\sum _{{\mathbf {x}}\in \varOmega } |\partial _x \dot{u}_\psi ({\mathbf {x}})|\).

The Gaussian approximation of \(\partial _x \dot{u}_\psi ({\mathbf {x}})\) will be made precise with a Berry–Esseen theorem. First, to cope with the Hermitian symmetry, we have to introduce a subset \(\varOmega _+\) of \(\varOmega \) that contains exactly one point in each pair of symmetric points, that is, such that

$$\begin{aligned} \varOmega \setminus \{0, \varvec{\eta }_x, \varvec{\eta }_y, \varvec{\eta }_{xy} \} = \varOmega _+ \cup (-\varOmega _+) \end{aligned}$$

and the union is disjoint. To make the following proof lighter, we will assume that if they exist, the Nyquist coefficients \(\hat{u}(\varvec{\eta }_x)\), \(\hat{u}(\varvec{\eta }_{xy})\), and \(\hat{u}(\varvec{\eta }_y)\) are equal to zero (in general, in natural images these coefficients are very small). Then we can write

$$\begin{aligned} u_\psi ({\mathbf {x}})= & {} |\hat{u}(\mathbf {0})| (-1)^{\varepsilon _0}\\&\quad +\, \frac{1}{MN} \sum _{\varvec{\xi }\in \varOmega _+} 2 |\hat{u}(\varvec{\xi })| \cos (\psi (\varvec{\xi }) + \langle {\mathbf {x}}, \varvec{\xi }\rangle ), \end{aligned}$$

and therefore

$$\begin{aligned} u_\psi (x_1+1,x_2) - u_\psi (x_1,x_2) = \frac{1}{MN} \sum _{\varvec{\xi }\in \varOmega _+} X_{\varvec{\xi }}, \end{aligned}$$

where we set for all \(\varvec{\xi }\in \varOmega _+\),

$$\begin{aligned} X_{\varvec{\xi }}= & {} 2 |\hat{u}(\varvec{\xi })| \left( \cos \Big ( \psi (\varvec{\xi }) + \langle {\mathbf {x}}, \varvec{\xi }\rangle + \frac{2\pi \xi _1}{M} \Big ) - \cos \big (\psi (\varvec{\xi }) + \langle {\mathbf {x}}, \varvec{\xi }\rangle \big ) \right) \\= & {} - 4 | \hat{u} (\varvec{\xi }) | \sin \Big (\psi (\varvec{\xi }) + \langle {\mathbf {x}}, \varvec{\xi }\rangle + \frac{\pi \xi _1}{M} \Big ) \sin \Big ( \frac{\pi \xi _1}{M} \Big ). \end{aligned}$$

Since the \(X_{\varvec{\xi }}\) are independent and centered r.v., we can apply the following generalization of the Berry–Esseen theorem (to non-identically distributed r.v.):

Theorem 2

(Berry–Esseen, 1942) Let \(X_1, \ldots , X_n\) be independent and centered r.v. in \(L^3\). Let us denote \(\sigma _i^2 = {\mathbb {E}}(X_i^2)\) and \(\rho _i = {\mathbb {E}}(|X_i|^3)\). Let \(F_n\) be the cumulative distribution function of

$$\begin{aligned} \frac{X_1 + \cdots + X_n}{( \sigma _1^2 + \cdots + \sigma _n^2)^{1/2}}. \end{aligned}$$

Then there exists a positive universal constant \(C_0\) such that

$$\begin{aligned} \forall t\in {\mathbb {R}}, \quad \left| F_n(t) - \mathbb {P} ( Y \le t ) \right| \le C_0 \psi _0, \end{aligned}$$

where \(Y \sim \mathcal {N}(0,1)\) and \(\displaystyle \psi _0 = \left( \sum _{i=1}^n \sigma _i^2 \right) ^{-3/2} \left( \sum _{i=1}^n \rho _i \right) \).

Concerning the value of \(C_0\), some recent papers (e.g., [33]) have shown that the best constant \(C_0\) is below \(0.56\).

Let us apply this theorem to the r.v. \(X_{\varvec{\xi }}\), \(\varvec{\xi }\in \varOmega _+\). Remark that if the r.v. \(U\) is uniformly distributed on \([0,2\pi ]\), then \({\mathbb {E}}(\sin ^2(U)) = \frac{1}{2}\) and \({\mathbb {E}}(|\sin (U)|^3) = \frac{4}{3\pi }\). Thus, we have for all \(\varvec{\xi }\in \varOmega _+\),

$$\begin{aligned}&\sigma _{\varvec{\xi }}^2 := {\mathbb {E}}(X_{\varvec{\xi }}^2) = 8 |\hat{u}(\varvec{\xi })|^2 \sin ^2 \left( \frac{\pi \xi _1}{M} \right) ,\\&\rho _{\varvec{\xi }} := {\mathbb {E}}(|X_{\varvec{\xi }}|^3) = \frac{4^4}{3\pi } |\hat{u}(\varvec{\xi })|^3 \left| \, \sin \left( \frac{\pi \xi _1}{M} \right) \right| ^3. \end{aligned}$$

Consequently,

$$\begin{aligned} \sum _{\varvec{\xi }\in \varOmega _+ } \sigma _{\varvec{\xi }}^2= & {} \sum _{\varvec{\xi }\in \varOmega _+} 8 |\hat{u}(\varvec{\xi })|^2 \sin ^2 \left( \frac{\pi \xi _1}{M} \right) \\= & {} \sum _{\varvec{\xi }\in \varOmega } 4 |\hat{u}(\varvec{\xi })|^2 \sin ^2 \left( \frac{\pi \xi _1}{M} \right) \\= & {} \sum _{\varvec{\xi }\in \varOmega } |\hat{u}(\varvec{\xi })|^2 \left| \mathrm{e}^{\frac{2 i \pi \xi _1}{M}} - 1 \right| ^2\\= & {} \big \Vert \widehat{\partial _x \dot{u}} \big \Vert _2^2 = MN \Vert \partial _x \dot{u} \Vert _2^2, \end{aligned}$$

and

$$\begin{aligned} \sum _{\varvec{\xi }\in \varOmega _+} \rho _{\varvec{\xi }}&= \frac{4^4}{3\pi } \sum \limits _{\varvec{\xi }\in \varOmega _+} |\hat{u}(\varvec{\xi })|^3 \left| \, \sin \left( \frac{\pi \xi _1}{M} \right) \right| ^3 = \frac{128}{3\pi } \big \Vert \widehat{\partial _x \dot{u}} \big \Vert _3^3. \end{aligned}$$

Hence, noticing that

$$\begin{aligned} \frac{1}{\sqrt{MN} \Vert \partial _x \dot{u} \Vert _2 } \sum _{\varvec{\xi }\in \varOmega _+ } X_{\varvec{\xi }} = \frac{\sqrt{MN}}{\Vert \partial _x \dot{u} \Vert _2} \; \partial _x \dot{u}_\psi ({\mathbf {x}}), \end{aligned}$$

and setting

$$\begin{aligned} \psi _0 = \frac{K(u)}{(MN)^{3/2}} \quad \text {with} \quad \ K(u) = \frac{\frac{128}{3\pi } \big \Vert \widehat{\partial _x \dot{u}} \big \Vert _3^3}{\Vert \partial _x \dot{u} \Vert _2^3 }, \end{aligned}$$

Theorem 2 ensures that for all \(t \in {\mathbb {R}}\),

$$\begin{aligned} \left| \mathbb {P} \left( \frac{\sqrt{MN}}{\Vert \partial _x \dot{u} \Vert _2} \; \partial _x \dot{u}_\psi ({\mathbf {x}}) \ge t \right) - \mathbb {P} (Y \ge t ) \right| \le \frac{C_0 K(u)}{(MN)^{3/2}}.\nonumber \\ \end{aligned}$$
(35)

Now, we write

$$\begin{aligned} {\mathbb {E}}\left( \frac{\sqrt{MN}}{\Vert \partial _x \dot{u} \Vert _2} \; | \partial _x \dot{u}_\psi ({\mathbf {x}}) | \right)= & {} \int _0^{+\infty } {\mathbb {P}}\left( \frac{\sqrt{MN}}{\Vert \partial _x \dot{u} \Vert _2} \; | \partial _x \dot{u}_\psi ({\mathbf {x}}) | \ge t \right) dt\\ \text {and} \;\; {\mathbb {E}}( Y )= & {} \int _0^{+\infty } {\mathbb {P}}( Y \ge t ) \, dt, \end{aligned}$$

and we split the integral into two parts: \(\int _0^{+\infty } = \int _0^A + \int _A^{+\infty }\). Inequality (35) can be integrated between \(0\) and \(A\) to give an upper bound on \(\int _0^A\), whereas the tail \(\int _A^{+\infty }\) can be treated using the Bienaymé–Chebyshev inequality:

$$\begin{aligned}&\mathbb {P} \left( \frac{\sqrt{MN}}{\Vert \partial _x \dot{u} \Vert _2} \; | \partial _x \dot{u}_\psi ({\mathbf {x}}) | \ge t \right) \\&\quad \le \frac{1}{t^2 MN \Vert \partial _x \dot{u} \Vert _2^2} {\mathbb {E}}\left( \sum _{\varvec{\xi }\in \varOmega _+ } X_{\varvec{\xi }} \right) ^2 \\&\quad = \frac{1}{t^2 MN \Vert \partial _x \dot{u} \Vert _2^2} \sum _{\varvec{\xi }\in \varOmega _+} \sigma _{\varvec{\xi }}^2 = \frac{1}{t^2}. \end{aligned}$$

Putting the two terms together, we have for all \(A>0\),

$$\begin{aligned} \left| {\mathbb {E}}\left( \frac{\sqrt{MN}}{\Vert \partial _x \dot{u} \Vert _2} \; | \partial _x \dot{u}_\psi ({\mathbf {x}}) | \right) - {\mathbb {E}}(|Y|) \right| \le \frac{2 C_0 K(u)}{(MN)^{3/2}} A + \frac{2}{A}, \end{aligned}$$

and then, choosing the best \(A\),

$$\begin{aligned} \left| {\mathbb {E}}\left( \frac{\sqrt{MN}}{\Vert \partial _x \dot{u} \Vert _2} \; | \partial _x \dot{u}_\psi ({\mathbf {x}}) | \right) - \sqrt{\frac{2}{\pi }} \right| \le 4 \frac{\sqrt{C_0 K(u)}}{(MN)^{3/4}}. \end{aligned}$$

Therefore, for all \({\mathbf {x}}\),

$$\begin{aligned}&\left| {\mathbb {E}}\left( \sqrt{MN} \; | \partial _x \dot{u}_\psi ({\mathbf {x}}) | \right) - \alpha _x \sqrt{\frac{2}{\pi }} \right| \le \frac{C_x(u)}{(MN)^{3/4}} \; ,\\&\text {where}\;\; C_x(u) = 4 \sqrt{C_0} \sqrt{ \frac{\frac{128}{3\pi } \big \Vert \widehat{\partial _x \dot{u}} \big \Vert _3^3}{\Vert \partial _x \dot{u} \Vert _2 } } \;. \end{aligned}$$

Recalling that \(\alpha _x = \Vert \partial _x \dot{u} \Vert _2\), one has

$$\begin{aligned}&\left| {\mathbb {E}}\big (\Vert \partial _x \dot{u}_\psi \Vert _1 \big ) - \alpha _x \sqrt{MN} \sqrt{\frac{2}{\pi }} \right| \\&\quad \le \frac{1}{\sqrt{MN}} \sum _{{\mathbf {x}}\in \varOmega } \left| {\mathbb {E}}\left( \sqrt{MN} \; | \partial _x \dot{u}_\psi ({\mathbf {x}}) | \right) - \alpha _x \sqrt{\frac{2}{\pi }} \right| \\&\quad \le \frac{1}{\sqrt{MN}} \sum _{{\mathbf {x}}\in \varOmega } \frac{C_x(u)}{(MN)^{3/4}}, \end{aligned}$$

and thus

$$\begin{aligned} \left| {\mathbb {E}}\big (\Vert \partial _x \dot{u}_\psi \Vert _1 \big ) - \alpha _x \sqrt{MN} \sqrt{\frac{2}{\pi }} \right| \le \frac{C_x(u)}{(MN)^{1/4}}. \end{aligned}$$
(36)

Finally, we obtain the following:

Theorem 3

If \(\psi \) is a discrete random phase field, then

$$\begin{aligned}&\quad \left| {\mathbb {E}}\big ( {\mathrm {TV}}(u_\psi ) \big ) - (\alpha _x + \alpha _y)\sqrt{MN} \sqrt{\frac{2}{\pi }} \right| \le \frac{C_x(u)+C_y(u)}{(MN)^{1/4}},\\&\quad \text {where}\;\; \forall a\in \{x,y\}, \quad C_\mathrm{a}(u) = 32 \sqrt{\frac{2C_0}{3\pi }} \sqrt{\frac{ \big \Vert \widehat{\partial _\mathrm{a} \dot{u}} \big \Vert _3^3}{\Vert \partial _\mathrm{a} \dot{u} \Vert _2 }}. \end{aligned}$$

Theorem 3 provides an explicit bound on the absolute error between the mean TV of a RPN and the exact formula (32) obtained for the associated Gaussian field, but this error bound depends on the considered image and all terms tend to increase with the image size. We can write a normalized inequality by dividing (36) by \(\alpha _x \sqrt{2MN/\pi }\), so that

$$\begin{aligned} \left| \frac{{\mathbb {E}}\big ( \Vert \partial _x \dot{u}_\psi \Vert _1 \big )}{\alpha _x \sqrt{MN}} \sqrt{\frac{\pi }{2}} -1 \right| \le c_x(u) , \end{aligned}$$
(37)

where the relative error bound is now

$$\begin{aligned} c_x(u) := \frac{32}{(MN)^{3/4}} \sqrt{\frac{C_0}{3}} \sqrt{ \frac{ \big \Vert \widehat{\partial _x \dot{u}} \big \Vert _3^3}{\Vert \partial _x \dot{u} \Vert _2^3 }} = 32 \sqrt{\frac{C_0}{3}} \sqrt{ \frac{ \big \Vert \widehat{\partial _x \dot{u}} \big \Vert _3^3}{ \big \Vert \widehat{ \partial _x \dot{u} } \big \Vert _2^3 }} \end{aligned}$$

(of course, one would obtain a similar inequality for the \(y\) component).

Taking \(C_0 = 0.56\), one can compute values of \(c_x\) for different natural images. For example, \(c_x(u) \approx 1.025\) for the \(512\times 512\) Lena image, while \(c_x(u) \approx 0.337\) for the 13-megapixel Lotriver image (see Footnote 5). The bound is of little use for Lena, and still far from sharp for Lotriver (numerical computations seem to indicate that the true values of the left-hand side of (37) are below \(10^{-4}\) for these two images).
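
The relative bound (37) is easy to evaluate numerically. The following sketch (our own illustration, assuming the image is stored as a 2D NumPy array and that \(\partial _x \dot{u}\) is the periodic forward difference along the first axis; the function name is ours) computes \(c_x(u)\) directly from the DFT of this difference, with \(C_0 = 0.56\) as above.

```python
import numpy as np

def relative_error_bound_cx(u, C0=0.56):
    """Relative error bound c_x(u) of Eq. (37)."""
    dx = np.roll(u, -1, axis=0) - u          # periodic forward difference (x-derivative)
    D = np.fft.fft2(dx)                      # DFT coefficients of the x-derivative
    num = np.sum(np.abs(D) ** 3)             # ||hat(d_x u)||_3^3
    den = np.sum(np.abs(D) ** 2) ** 1.5      # ||hat(d_x u)||_2^3
    return 32.0 * np.sqrt(C0 / 3.0) * np.sqrt(num / den)
```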

Even though it does not provide an accurate error bound, Theorem 3 remains interesting because it indicates that (32) provides the correct asymptotic estimate of the mean TV of a RPN when the image size tends to infinity. Indeed, it has been known for a long time that natural images statistically exhibit a power-law Fourier spectrum (see [8] and other references in [31]), that is,

$$\begin{aligned} |\hat{u}(\varvec{\xi })| \propto |\varvec{\xi }|^{-\alpha } \end{aligned}$$
(38)

on average, where \(\alpha \) is a bit larger than 1 in general. Using (38) in the expression of \(c_x\) above, one easily obtains that for an \(R\times R\) image, \(c_x\propto R^{-1/2}\) as \(R\rightarrow \infty \), provided that \(\alpha <5/3\). This suggests that the bound \(c_x\) tends to decrease to \(0\) when the size of the considered image increases.
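
For the reader's convenience, here is a heuristic (non-rigorous) version of this computation, treating the discrete sums as integrals. For an \(R\times R\) image satisfying (38), \(|\widehat{\partial _x \dot{u}}(\varvec{\xi })| = 2 |\hat{u}(\varvec{\xi })| \, |\sin (\pi \xi _1/R)|\), which is of order \(|\varvec{\xi }|^{-\alpha } |\xi _1|/R\), so that

$$\begin{aligned} \sum _{\varvec{\xi }} \big | \widehat{\partial _x \dot{u}}(\varvec{\xi }) \big |^p \;\asymp \; R^{-p} \int _1^R r^{\,1+p-p\alpha } \, \mathrm{d}r \;\asymp \; R^{\,2-p\alpha } \qquad \text {when} \quad \alpha < 1 + \frac{2}{p}. \end{aligned}$$

Plugging \(p=3\) and \(p=2\) into the expression of \(c_x\) then gives \(c_x \propto (R^{2-3\alpha })^{1/2} / (R^{2-2\alpha })^{3/4} = R^{-1/2}\), under the condition \(\alpha <5/3\).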

1.2 Appendix 2: Gaussian Approximation of \({\mathrm {TV}}(W)\)

We would like to prove that \({\mathrm {TV}}(u_\psi )\) and \({\mathrm {TV}}(u*W)\) approximately (or asymptotically) follow Gaussian distributions. Unfortunately, as already mentioned in the previous appendix, we cannot apply a classical central limit theorem because the r.v. appearing in the TV formula are not independent. These dependencies introduce many difficulties, which is why we here focus on a much simpler problem, namely the asymptotic distribution of \({\mathrm {TV}}(W)\) (which is the TV of the Gaussian model in the particular case \(u=\delta _0\)).

Proposition 8

Let \((\varOmega _n)_{n\ge 0}\) be a sequence of rectangular domains of \({\mathbb {Z}}^2\) such that \(|\varOmega _n| \rightarrow \infty \) when \(n\) tends to \(\infty \), and let \((W_n({\mathbf {x}}))_{{\mathbf {x}}\in \varOmega _n}\) be a set of i.i.d. r.v. with distribution \(\mathcal {N} \left( 0,|\varOmega _n|^{-1/2} \right) \). Then one has

$$\begin{aligned}&{\mathrm {TV}}(W_n) - {\mathbb {E}}({\mathrm {TV}}(W_n)) \xrightarrow {\; d} \mathcal {N}(0,\sigma ^2), \;\;\text {where}\\&{\mathbb {E}}({\mathrm {TV}}(W_n)) = \frac{4|\varOmega _n|^{1/2}}{\sqrt{\pi }} \text { and } \sigma ^2 = \frac{8}{\pi } \left( \omega (1) + 6 \cdot \omega \Big ( \frac{1}{2} \Big ) \right) . \end{aligned}$$

To prove this result, we will use the central limit theorem given in [18], which applies to a set of r.v. whose dependencies are controlled through their dependency graph.

Definition 5

([18]) A graph \(\varGamma \) is a dependency graph for a set of r.v. if the following two conditions are satisfied:

  1. 1.

    There exists a one-to-one correspondence between the r.v. and the vertices of the graph.

  2. 2.

    If \(V_1\) and \(V_2\) are two disjoint sets of vertices of \(\varGamma \) such that no edge of \(\varGamma \) has one endpoint in \(V_1\) and the other in \(V_2\), then the corresponding sets of r.v. are independent.

Now, we can recall the following result.

Theorem 4

(Janson [18]) Suppose, for each integer \(n\), that \((X_{n,i})_{i = 1, \ldots , N_n}\) is a set of r.v. satisfying \(|X_{n,i}| \le A_n\) a.s. for all \(i\). Suppose further that \(\varGamma _n\) is a dependency graph for this set and let \(M_n\) be the maximal degree (see Footnote 6) of \(\varGamma _n\) (unless \(\varGamma _n\) has no edges at all, in which case we set \(M_n = 1\)). Let \(S_n = \sum _{i=1}^{N_n} X_{n,i}\) and \(\sigma _n^2 = {\mathbb {V}}{\mathrm {ar}}(S_n)\). If there exists an integer \(m\) such that

$$\begin{aligned} \left( \frac{N_n}{M_n} \right) ^{1/m} \; \frac{M_n A_n}{\sigma _n} \rightarrow 0 \quad \hbox {as} \quad n \rightarrow \infty , \end{aligned}$$
(39)

then \(\frac{S_n - {\mathbb {E}}(S_n)}{\sigma _n} \rightarrow \mathcal {N}(0,1)\) in distribution as \(n \rightarrow \infty \).

First, we will clarify the remark following this theorem in [18]. It states that we can replace the boundedness hypothesis

$$\begin{aligned}&\forall n, \quad \forall i, \quad |X_{n,i}| \le A_n \quad \hbox {a.s.}\nonumber \\&\text {by}\;\; \frac{M_n}{\sigma _n^2} \sum _{i=1}^{N_n} {\mathbb {E}}( X_{n,i}^2 \mathbf {1}_{ |X_{n,i}| > A_n } ) \rightarrow 0 \quad \hbox {as} \; n \rightarrow \infty . \end{aligned}$$
(40)

Indeed, assume that (40) is true. We use the truncation argument suggested in [18] and set

$$\begin{aligned}&X_{n,i}^T = X_{n,i} \; \mathbf {1}_{|X_{n,i}| \le A_n },\\&S_n^T = \sum _{i=1}^{N_n} X_{n,i}^T, \quad \text {and}\quad (\sigma _n^T)^2 = {\mathbb {V}}{\mathrm {ar}}(S_n^T). \end{aligned}$$

It is clear that the variables \(X_{n,i}^T\) have the same dependency degree as the \(X_{n,i}\). We will see that (39) is still true for \(\sigma _n^T\), so that Janson's theorem will give

$$\begin{aligned} \frac{S_n^T - {\mathbb {E}}(S_n^T ) }{\sigma _n^T} \xrightarrow {\; d} \mathcal {N}(0,1). \end{aligned}$$

But first let us explain how we control the residual sum. One can write

$$\begin{aligned}&\frac{S_n - {\mathbb {E}}(S_n)}{\sigma _n} - \frac{S_n^T - {\mathbb {E}}(S_n^T)}{\sigma _n} \\&\quad =\frac{1}{\sigma _n} \sum _{i=1}^{N_n} \left( X_{n,i} \; \mathbf {1}_{|X_{n,i}| > A_n } - {\mathbb {E}}( X_{n,i} \; \mathbf {1}_{|X_{n,i}| > A_n } ) \right) . \end{aligned}$$

For a fixed \(n\), setting

$$\begin{aligned} T_i = X_{n,i} \; \mathbf {1}_{|X_{n,i}| > A_n } - {\mathbb {E}}( X_{n,i} \; \mathbf {1}_{|X_{n,i}| > A_n } ) \end{aligned}$$

(which again have a dependency degree smaller than \(M_n\)) and writing \(i \sim j\) if \(T_i\) and \(T_j\) are not independent, one can write

$$\begin{aligned} {\mathbb {E}}\left( \bigg ( \sum _i T_i \bigg )^2 \right)= & {} \sum _{i,j} {\mathbb {E}}( T_i T_j) \\= & {} \sum _i \sum _{j \sim i} {\mathbb {E}}(T_i T_j) \\\le & {} \frac{1}{2} \sum _i \sum _{j \sim i} {\mathbb {E}}(T_i^2) + {\mathbb {E}}(T_j^2) \\= & {} \frac{1}{2} \sum _i \sum _{j \sim i} {\mathbb {E}}(T_i^2) + \frac{1}{2} \sum _j \sum _{i \sim j} {\mathbb {E}}(T_j^2) \\\le & {} (M_n + 1) \sum _i {\mathbb {E}}(T_i^2) \\\le & {} 2 M_n \sum _i {\mathbb {E}}(T_i^2) , \end{aligned}$$

which gives

$$\begin{aligned}&{\mathbb {E}}\left( \frac{1}{\sigma _n^2} \left( \sum _{i=1}^{N_n} X_{n,i} \; \mathbf {1}_{|X_{n,i}| > A_n } - {\mathbb {E}}( X_{n,i} \; \mathbf {1}_{|X_{n,i}| > A_n } ) \right) ^2 \right) \\&\quad \le 2 \; \frac{M_n}{\sigma _n^2} \sum _{i = 1}^{N_n} {\mathbb {V}}{\mathrm {ar}}( X_{n,i} \; \mathbf {1}_{|X_{n,i}| > A_n } ) \\&\quad \le 2 \; \frac{M_n}{\sigma _n^2} \sum _{i =1}^{N_n} {\mathbb {E}}( X_{n,i}^2 \; \mathbf {1}_{|X_{n,i}| > A_n } ). \end{aligned}$$

Therefore, (40) gives that

$$\begin{aligned} \frac{S_n - {\mathbb {E}}(S_n)}{\sigma _n} - \frac{S_n^T - {\mathbb {E}}(S_n^T)}{\sigma _n} \xrightarrow {\; L^2} 0. \end{aligned}$$
(41)

To conclude, it remains to show that \(\frac{\sigma _n^T}{\sigma _n} \rightarrow 1\) as \(n\) tends to \(\infty \). Indeed, in that case checking condition (39) for \(\sigma _n\) or for \(\sigma _n^T\) is equivalent, so that we can apply Janson's theorem and obtain

$$\begin{aligned} \frac{S_n^T - {\mathbb {E}}(S_n^T)}{\sigma _n^T} \xrightarrow {\; d} \mathcal {N}(0,1). \end{aligned}$$
(42)

Moreover, it implies that the distributional convergence of \(\frac{S_n^T - {\mathbb {E}}(S_n^T)}{\sigma _n^T}\) is equivalent to that of \(\frac{S_n^T - {\mathbb {E}}(S_n^T)}{\sigma _n}\). To show that \(\sigma _n\) and \(\sigma _n^T\) are equivalent, notice that (41) and the reverse Minkowski inequality (in \(L^2\)) give

$$\begin{aligned} \left\| \frac{S_n - {\mathbb {E}}(S_n) }{\sigma _n} \right\| _{L^2} - \left\| \frac{S_n^T - {\mathbb {E}}(S_n^T) }{\sigma _n} \right\| _{L^2} \rightarrow 0 \; , \end{aligned}$$

which is exactly

$$\begin{aligned} 1 - \frac{\sigma _n^T}{\sigma _n} \rightarrow 0. \end{aligned}$$
(43)

Finally, putting together (41), (42), and (43), we obtain that

$$\begin{aligned} \frac{S_n - {\mathbb {E}}(S_n) }{\sigma _n} \xrightarrow {\; d} \mathcal {N}(0,1). \end{aligned}$$

\(\square \)

Let us now get into the details of the application to the TV of a white Gaussian noise. For \({\mathbf {x}}\in \varOmega _n\), we will set

$$\begin{aligned} Z_{n, {\mathbf {x}}}= & {} | \dot{W_n}(x+1,y) -\dot{W_n}(x,y)| \\&\quad +\, | \dot{W_n}(x,y+1) - \dot{W_n}(x,y)|, \end{aligned}$$

so that \({\mathrm {TV}}(W_n) = \sum _{{\mathbf {x}}\in \varOmega _n} Z_{n, {\mathbf {x}}}\). With these notations, we will be able to apply Janson's theorem on this sum with \(M_n = 6\). Indeed, for a fixed \({\mathbf {x}}=(x,y) \in \varOmega _n\), the variables \(\dot{W_n}(x+1,y)\), \(\dot{W_n}(x,y+1)\), and \(\dot{W_n}(x,y)\) appear in \(Z_{n, {\mathbf {x}}}\). These three variables also appear in \(Z_{n, (x-1,y)}, Z_{n, (x-1,y+1)}, Z_{n, (x,y-1)}\), \(Z_{n, (x+1,y-1)}, Z_{n, (x+1,y)}, Z_{n, (x,y+1)}\), and do not appear in any other term \(Z_{n, {\mathbf {x}}'}\), \({\mathbf {x}}'\in \varOmega _n\). That is why we can set \(M_n = 6\).

Next, to apply the theorem, we also need to know the variance of the sum. It is actually independent of \(n\) and given by Theorem 1:

$$\begin{aligned} \sigma ^2 = \sigma _n^2 = {\mathbb {V}}{\mathrm {ar}}({\mathrm {TV}}(W_n)) = \frac{8}{\pi } ( \omega (1) + 6 \cdot \omega (1/2) ). \end{aligned}$$

Notice that the theorem also gives

$$\begin{aligned} {\mathbb {E}}({\mathrm {TV}}(W_n)) = \frac{4}{\sqrt{\pi }} |\varOmega _n|^{1/2}. \end{aligned}$$

Now, it remains to find a sequence \(A_n\) which satisfies both (39) and (40). Since in our case \(M_n\) and \(\sigma _n\) are constant, we must find \(A_n\) and an \(m\) such that

$$\begin{aligned} |\varOmega _n|^{1/m} A_n \rightarrow 0 \quad \hbox {and} \quad \sum _{{\mathbf {x}}\in \varOmega _n} {\mathbb {E}}( Z_{n, {\mathbf {x}}}^2 \mathbf {1}_{ |Z_{n, {\mathbf {x}}}| > A_n } ) \rightarrow 0 \end{aligned}$$

as \(n \rightarrow \infty \). Since all the \(Z_{n, {\mathbf {x}}}\) follow the Gaussian distribution with standard deviation \(2|\varOmega _n|^{-1/2}\), the second condition is equivalent to

$$\begin{aligned} {\mathbb {E}}\left( Z^2 \mathbf {1}_{|Z| > A_n |\varOmega _n| } \right) \rightarrow 0. \end{aligned}$$

Hence, it suffices to find \(A_n\) and an \(m\) such that

$$\begin{aligned} |\varOmega _n|^{1/m} A_n \rightarrow 0 \quad \hbox {and} \quad A_n |\varOmega _n | \rightarrow \infty . \end{aligned}$$

We can take \(m=3\) and \(A_n = |\varOmega _n|^{-1/2}\). The two conditions are satisfied, and with Janson’s theorem we obtain the result of Proposition 8. \(\square \)
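
Proposition 8 can easily be checked by simulation. The sketch below (our own illustration, assuming the \(W_n({\mathbf {x}})\) are i.i.d. centered Gaussian variables with standard deviation \(|\varOmega _n|^{-1/2}\), the normalization under which \({\mathbb {E}}({\mathrm {TV}}(W_n)) = 4|\varOmega _n|^{1/2}/\sqrt{\pi }\)) compares the empirical mean and variance of \({\mathrm {TV}}(W_n)\) with the values given in the proposition.

```python
import numpy as np

def tv(w):
    """Anisotropic periodic total variation: l1 norm of the forward differences."""
    return (np.abs(np.roll(w, -1, axis=0) - w).sum()
            + np.abs(np.roll(w, -1, axis=1) - w).sum())

rng = np.random.default_rng(0)
M = N = 128
std = 1.0 / np.sqrt(M * N)                      # assumed normalization of W_n
samples = np.array([tv(rng.normal(0.0, std, size=(M, N))) for _ in range(2000)])

omega = lambda t: t * np.arcsin(t) + np.sqrt(1.0 - t * t) - 1.0
print("mean     :", samples.mean(), "vs", 4.0 * np.sqrt(M * N) / np.sqrt(np.pi))
print("variance :", samples.var(), "vs", 8.0 / np.pi * (omega(1.0) + 6.0 * omega(0.5)))
```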

Remark

One can point out that we applied a powerful central limit theorem in order to prove a very specific case. In fact, one can adapt the preceding proof to show that, as soon as \(u\) has compact support in \(\varOmega _n\) with \(|\varOmega _n| \rightarrow \infty \), we have normal convergence of \({\mathrm {TV}}(u*W)\) after centering and normalization.

1.3 Appendix 3: Proof of Theorem 1

Before proving Theorem 1, let us give two lemmas about Gaussian random vectors.

Lemma 4

Let \(X\) be a Gaussian r.v. with zero mean and variance \(\sigma ^2\). Then \({\mathbb {E}}(|X|) = \sigma \sqrt{\frac{2}{\pi }}\).

Proof

Since \(X \sim \mathcal {N}(0,\sigma ^2)\), one can write

$$\begin{aligned} {\mathbb {E}}(|X|)= & {} \frac{1}{\sigma \sqrt{2\pi }} \int _{-\infty }^{+\infty } |x| \mathrm{e}^{-\frac{x^2}{2\sigma ^2}} \mathrm{d}x\\= & {} \frac{2}{\sigma \sqrt{2\pi }} \int _0^{+\infty } x \mathrm{e}^{- \frac{x^2}{2\sigma ^2}} \mathrm{d}x \\= & {} \frac{2}{\sigma \sqrt{2\pi }} \left[ - \sigma ^2\mathrm{e}^{-\frac{x^2}{2\sigma ^2} } \right] _0^{+\infty } = \sigma \sqrt{\frac{2}{\pi }}. \end{aligned}$$

\(\square \)

Lemma 5

Let \(Z = (X,Y)^T\) be a Gaussian random vector with zero mean and covariance matrix

$$\begin{aligned} {\mathbb {E}}(Z Z^T) = \begin{pmatrix} a^2 &{} \quad ab \sin \theta \\ ab \sin \theta &{}\quad b^2 \end{pmatrix}, \end{aligned}$$

with \(\theta \in [- \frac{\pi }{2} , \frac{\pi }{2} ]\). Then, one has

$$\begin{aligned} {\mathbb {E}}(|XY|) = \frac{2|ab|}{\pi } (\cos \theta + \theta \sin \theta ) \; . \end{aligned}$$

Proof

If \(a = 0\) or \(b = 0\), then \(XY = 0\) almost surely, so \({\mathbb {E}}(|XY|) = 0\) and there is nothing more to prove. Hence, we can assume that \(ab \ne 0\) and set \(X' = X/a\), \(Y' = Y/b\), so that

$$\begin{aligned} {\mathbb {E}}|XY| = |ab| \cdot {\mathbb {E}}|X'Y'|, \end{aligned}$$
(44)

where the covariance of \(Z' = (X',Y')^T\) is

$$\begin{aligned} C = {\mathbb {E}}( Z' Z'^T) = \begin{pmatrix} 1 &{} \quad \sin \theta \\ \sin \theta &{}\quad 1 \end{pmatrix}. \end{aligned}$$

If \(|\sin \theta | = 1\), then \(Y' = X' \sin \theta \) almost surely, so that \({\mathbb {E}}|X'Y'| = {\mathbb {E}}X'^2 = 1\) and \({\mathbb {E}}| XY | = |ab|\) by (44). Hence, we assume in the following that \(|\theta | < \frac{\pi }{2}\). Now we have

$$\begin{aligned} C^{-1} = \frac{1}{\cos ^2 \theta } \begin{pmatrix} 1 &{} \quad - \sin \theta \\ - \sin \theta &{} \quad 1 \end{pmatrix}, \end{aligned}$$

so that \({\mathbb {E}}|X'Y'|\) equals

$$\begin{aligned} \frac{1}{2\pi \cos \theta } \int _{{\mathbb {R}}^2} |xy| \exp \left( - \frac{x^2+y^2 - 2xy \sin \theta }{2\cos ^2 \theta } \right) \mathrm{d}x \mathrm{d}y. \end{aligned}$$

Using symmetry considerations, this formula can be rewritten in the form

$$\begin{aligned}&{\mathbb {E}}|X'Y'| = \frac{I(\theta ) + I(-\theta )}{\pi \cos \theta }\nonumber \\&\text {with }I(\theta ) = \int _0^{+\infty } \int _0^{+\infty } xy\nonumber \\&\qquad \qquad \qquad \,\,\times \,\exp \left( -\frac{x^2+y^2 - 2xy \sin \theta }{2 \cos ^2 \theta } \right) \mathrm{d}x \mathrm{d}y. \end{aligned}$$
(45)

Using polar coordinates, we then get

$$\begin{aligned}&I(\theta ) = \int _0^{+\infty } \int _0^{\frac{\pi }{2}} r^2 \cos \varphi \sin \varphi \\&\qquad \qquad \times \, \exp \left( - \frac{r^2}{2\cos ^2 \theta } ( 1 - 2 \cos \varphi \sin \varphi \sin \theta ) \right) r \; \mathrm{d}r \mathrm{d}\varphi \\&\,\,\;\qquad = \int _0^{\frac{\pi }{2}} \left( \cos \varphi \sin \varphi \int _0^{+\infty } r^3 \mathrm{e}^{-\alpha (\varphi ) r^2} \mathrm{d}r \right) \mathrm{d}\varphi ,\\&\text {with}\;\; \alpha (\varphi ) = \frac{1 - 2 \cos \varphi \sin \varphi \sin \theta }{2 \cos ^2 \theta } \ge 0. \end{aligned}$$

Integrating the inner integral by parts yields

$$\begin{aligned}&\int _0^{+\infty } r^3 \mathrm{e}^{-\alpha (\varphi ) r^2} \mathrm{d}r \\&\quad = \left[ r^2 \cdot \frac{1}{-2 \alpha (\varphi )} \mathrm{e}^{-\alpha (\varphi ) r^2} \right] _0^{+\infty }\\&\qquad - \frac{1}{-2\alpha (\varphi )} \int _0^{+\infty } 2 r \mathrm{e}^{-\alpha (\varphi ) r^2} \mathrm{d}r \\&\quad = \frac{1}{2\alpha (\varphi )^2}. \end{aligned}$$

Thus, we have

$$\begin{aligned} I(\theta )= & {} \int _0^{\frac{\pi }{2}} \cos \varphi \sin \varphi \cdot \frac{ (2\cos ^2 \theta )^2}{2(1-2\cos \varphi \sin \varphi \sin \theta )^2} \mathrm{d}\varphi \\= & {} 2 \cos ^4 \theta \cdot \int _0^{\frac{\pi }{2}} \frac{\tan \varphi }{(\cos ^{-2}\varphi - 2 \tan \varphi \sin \theta )^2} \; \frac{\mathrm{d}\varphi }{\cos ^2 \varphi } \\= & {} 2 \cos ^4 \theta \cdot \int _0^{+\infty } \frac{t}{(1 + t^2 - 2t \sin \theta )^2} \mathrm{d}t \quad (t = \tan \varphi ) \\= & {} 2 \cos ^4 \theta \cdot \int _0^{+\infty } \frac{t}{((t- \sin \theta )^2 + \cos ^2 \theta )^2} \mathrm{d}t \\= & {} 2 \cos ^4 \theta \cdot \int _{-\sin \theta }^{+\infty } \frac{u + \sin \theta }{(u^2 + \cos ^2 \theta )^2} \mathrm{d}u \quad (u = t - \sin \theta ). \end{aligned}$$

Now, usual integration formulae give (for \(a>0\))

$$\begin{aligned}&\int \frac{u}{(u^2 + a^2)^2} \mathrm{d}u = \frac{-1}{2(u^2+a^2)}\\&\hbox {and} \quad \int \frac{1}{(u^2 + a^2)^2} \mathrm{d}u = \frac{1}{2a^3} \arctan \frac{u}{a} + \frac{u}{2a^2(u^2+a^2)}, \end{aligned}$$

so that \(I(\theta )\) equals

$$\begin{aligned}&I(\theta ) = 2\cos ^4 \theta \Bigg ( \left[ \frac{-1}{2(u^2 + \cos ^2 \theta ) } \right] _{-\sin \theta }^{+\infty }\\&\qquad + \sin \theta \left[ \frac{1}{2 \cos ^3 \theta } \arctan \frac{u}{\cos \theta } + \frac{u}{2 \cos ^2 \theta (u^2 + \cos ^2 \theta )} \right] _{- \sin \theta }^{+\infty } \Bigg )\\&\quad = 2 \cos ^4 \theta \left( \frac{1}{2} + \sin \theta \left( \frac{\pi }{4 \cos ^3 \theta } + \frac{\theta }{2 \cos ^3 \theta } + \frac{\sin \theta }{2 \cos ^2 \theta } \right) \right) \\&\quad = \cos ^4 \theta + \frac{\pi }{2} \sin \theta \cos \theta + \theta \sin \theta \cos \theta + \sin ^2 \theta \cos ^2 \theta \\&\quad = \cos ^2 \theta + \frac{\pi }{2} \sin \theta \cos \theta + \theta \sin \theta \cos \theta . \end{aligned}$$

Then, \(I(\theta ) + I(-\theta ) = 2 \cos \theta ( \cos \theta + \theta \sin \theta )\) (the term \(\frac{\pi }{2} \sin \theta \cos \theta \), which is odd in \(\theta \), cancels) and we conclude by (44) and (45) that

$$\begin{aligned} {\mathbb {E}}|XY| = \frac{2|ab|}{\pi } ( \cos \theta + \theta \sin \theta ) \; .\end{aligned}$$

\(\square \)
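
As a quick numerical sanity check of Lemma 5 (our own illustration, not part of the proof), one can compare the closed form with a Monte Carlo estimate:

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, theta = 2.0, 0.5, 0.3
cov = [[a * a, a * b * np.sin(theta)], [a * b * np.sin(theta), b * b]]
X, Y = rng.multivariate_normal([0.0, 0.0], cov, size=500_000).T
print("empirical E|XY| :", np.mean(np.abs(X * Y)))
print("Lemma 5         :", 2.0 * abs(a * b) / np.pi * (np.cos(theta) + theta * np.sin(theta)))
```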

Proof of Theorem 1

Writing \(U = u*W\), we have by linearity

$$\begin{aligned} \partial _x \dot{U} = (\partial _x \dot{u}) * W, \end{aligned}$$

so that the discrete random field \(\partial _x \dot{U}\) is a stationary Gaussian field whose marginal distributions have zero mean and variance

$$\begin{aligned} {\mathbb {E}}( (\partial _x \dot{U} ({\mathbf {x}}) )^2 ) = \frac{1}{MN} \sum _{{\mathbf {y}}\in \varOmega } (\partial _x \dot{u} ({\mathbf {x}}-{\mathbf {y}}) )^2 = \frac{\alpha _x^2}{MN} \; . \end{aligned}$$

From Lemma 4, we hence get that for any \({\mathbf {x}}\in \varOmega \),

$$\begin{aligned} {\mathbb {E}}( | \partial _x \dot{U}({\mathbf {x}}) | ) = \frac{\alpha _x}{\sqrt{MN}} \sqrt{\frac{2}{\pi }}, \end{aligned}$$

and by using a similar reasoning on \(\partial _y \dot{U}\), we obtain (10).

We now consider the variance of \({\mathrm {TV}}(U)\). We have

$$\begin{aligned} {\mathbb {E}}( {\mathrm {TV}}(U)^2 )= & {} \sum _{{\mathbf {x}},{\mathbf {y}}\in \varOmega } {\mathbb {E}}| \partial _x \dot{U}({\mathbf {x}}) \partial _x \dot{U} ({\mathbf {y}}) | + {\mathbb {E}}| \partial _x \dot{U} ({\mathbf {x}}) \partial _y \dot{U}({\mathbf {y}}) |\\&\quad +\, {\mathbb {E}}| \partial _y \dot{U} ({\mathbf {x}}) \partial _x \dot{U} ({\mathbf {y}}) | + {\mathbb {E}}| \partial _y \dot{U} ({\mathbf {x}}) \partial _y \dot{U} ({\mathbf {y}}) |. \end{aligned}$$

Writing \(\mathbf {z}= {\mathbf {y}}- {\mathbf {x}}\) and using the stationarity of \(\nabla \dot{U}\), the quantity \({\mathbb {E}}({\mathrm {TV}}(U)^2)\) can be rewritten as

$$\begin{aligned}&MN \sum _{\mathbf {z}\in \varOmega } {\mathbb {E}}| \partial _x \dot{U} (0) \partial _x \dot{U} (\mathbf {z}) | + {\mathbb {E}}| \partial _x \dot{U} (0) \partial _y \dot{U} (\mathbf {z}) | \nonumber \\&\quad +\, {\mathbb {E}}| \partial _y \dot{U} (0) \partial _x \dot{U} (\mathbf {z}) | + {\mathbb {E}}| \partial _y \dot{U} (0) \partial _y \dot{U} (\mathbf {z}) |. \end{aligned}$$
(46)

Each term of this sum can be written in the form \({\mathbb {E}}| XY |\) where \((X,Y)\) is a zero-mean 2-dimensional Gaussian vector with covariance matrix

$$\begin{aligned} \begin{pmatrix} {\mathbb {E}}(X^2) &{}\quad \! {\mathbb {E}}(XY) \\ {\mathbb {E}}(XY) &{}\quad \! {\mathbb {E}}(Y^2) \end{pmatrix}. \end{aligned}$$

For the second term of (46), for example, we have \(X = \partial _x \dot{U} (0)\) and \(Y = \partial _y \dot{U} (\mathbf {z})\), thus

$$\begin{aligned} {\mathbb {E}}(XY)= & {} {\mathbb {E}}\left( \sum _{{\mathbf {x}}\in \varOmega , {\mathbf {y}}\in \varOmega } \partial _x \dot{u} (-{\mathbf {x}}) \partial _y \dot{u} (\mathbf {z}- {\mathbf {y}}) W({\mathbf {x}}) W({\mathbf {y}}) \right) \\= & {} \frac{1}{MN} \sum _{{\mathbf {x}}\in \varOmega } \partial _x \dot{u} ({\mathbf {x}}) \partial _y \dot{u}(\mathbf {z}+{\mathbf {x}}) = \frac{1}{MN} \varGamma _{xy} (\mathbf {z}) \end{aligned}$$

and the covariance matrix of \((X,Y)\) is

$$\begin{aligned} \frac{1}{MN} \begin{pmatrix} \alpha _x^2 &{}\quad \! \varGamma _{xy} (\mathbf {z}) \\ \varGamma _{xy}(\mathbf {z}) &{}\quad \! \alpha _y^2 \end{pmatrix}, \end{aligned}$$

so that thanks to Lemma 5 we obtain

$$\begin{aligned} {\mathbb {E}}| XY | = \frac{2 \alpha _x \alpha _y}{\pi MN} \cdot \widetilde{\omega } \left( \frac{\varGamma _{xy}(\mathbf {z})}{\alpha _x \alpha _y} \right) , \end{aligned}$$

with \(\widetilde{\omega }(t) = t \arcsin t + \sqrt{1 - t^2} = \omega (t) + 1\). Combining all terms arising from (46), we finally obtain that

$$\begin{aligned} {\mathbb {E}}( {\mathrm {TV}}(U)^2 )= & {} \frac{2}{\pi } \sum _{\mathbf {z}\in \varOmega } \alpha _x^2 \widetilde{\omega } \left( \frac{\varGamma _{xx}(\mathbf {z})}{\alpha _x^2} \right) \nonumber \\&\quad +\, 2\alpha _x \alpha _y \widetilde{\omega } \left( \frac{\varGamma _{xy}(\mathbf {z})}{\alpha _x \alpha _y} \right) + \alpha _y^2 \widetilde{\omega } \left( \frac{\varGamma _{yy} (\mathbf {z})}{\alpha _y^2} \right) \end{aligned}$$
(47)

and the announced result follows from

$$\begin{aligned} {\mathbb {V}}{\mathrm {ar}}( {\mathrm {TV}}(U) ) = {\mathbb {E}}( {\mathrm {TV}}(U)^2) - ( {\mathbb {E}}({\mathrm {TV}}(U)) )^2, \end{aligned}$$

which simply amounts to changing \(\widetilde{\omega }\) into \(\omega \) in (47). \(\square \)

1.4 Appendix 4: Unimodal Regression

In this appendix, we detail an algorithm to compute the distance from a signal \(s = (s(1),s(2), \ldots , s(n) ) \in {\mathbb {R}}^{n}\) to the set \(U\) of unimodal signals of size \(n\), defined by

$$\begin{aligned}&U = \bigcup _{1\le i\le n} C_i \cap D_i,\\&\text {where } C_i = \{p\in {\mathbb {R}}^n, \; p(1)\le p(2) \le \cdots \le p(i)\}\\&\text {and }D_i = \{p\in {\mathbb {R}}^n, \; p(i)\ge p(i+1) \ge \cdots \ge p(n)\} \end{aligned}$$

(with the natural convention \(C_1=D_n={\mathbb {R}}^n\)). The algorithm we use is due to Frisen [15]. It is based on the fact that \(U\) can also be written as

$$\begin{aligned} U = \bigcup _{1\le i\le n-1} C_i \cap D_{i+1}, \end{aligned}$$

which entails \(d(s,U) = \min _{1\le i\le n-1} d_i\) with

$$\begin{aligned} d_i^2= & {} \min _{p\in C_i\cap D_{i+1}} \Vert p-s\Vert _2^2\\= & {} \min _{p\in C_i} \sum _{k=1}^i (p(k)-s(k))^2 + \min _{q\in D_{i+1}} \sum _{k=i+1}^n (q(k)-s(k))^2. \end{aligned}$$

These two monotone regression problems are independent and can be solved in time \(\mathcal {O}(n)\) using the simple Pool Adjacent Violators algorithm described in [1] (see Algorithm 4). Thus, the computation of \(d(s,U)\) can be realized in time \(\mathcal {O}(n^2)\) (Algorithm 5). Note that in fact the unimodal regression problem can be solved in time \(\mathcal {O}(n)\) with a more sophisticated algorithm (see [34]), but considering the small value of \(n\) we use in Sect. 5.3 (\(n=20\)), the gain obtained with this algorithm would be negligible compared to other steps (e.g., Fourier transforms) of the deblurring process.
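
A compact sketch of this scheme is given below (our own Python illustration, not a reproduction of Algorithms 4 and 5; the function names are ours): `pav` performs the \(L^2\) isotonic regression by pooling adjacent violators, and `unimodal_distance` combines an isotonic fit of the prefix with an antitonic fit of the suffix for every candidate mode position, as in Frisen's decomposition above.

```python
def pav(y):
    """L2 isotonic (non-decreasing) regression by Pool Adjacent Violators."""
    sums, counts = [], []                       # one (sum, count) pair per pooled block
    for v in y:
        s, c = float(v), 1
        # merge blocks while the previous block mean exceeds the current one
        while sums and sums[-1] / counts[-1] > s / c:
            s += sums.pop()
            c += counts.pop()
        sums.append(s)
        counts.append(c)
    fit = []
    for s, c in zip(sums, counts):
        fit.extend([s / c] * c)
    return fit

def unimodal_distance(s):
    """L2 distance d(s, U) from s to the set U of unimodal signals."""
    n = len(s)
    if n < 2:
        return 0.0
    best = float("inf")
    for i in range(1, n):                       # candidate split C_i x D_{i+1}
        up = pav(s[:i])                         # isotonic fit of the prefix
        down = pav(s[i:][::-1])[::-1]           # antitonic fit of the suffix
        err = sum((p - v) ** 2 for p, v in zip(up, s[:i])) \
            + sum((q - v) ** 2 for q, v in zip(down, s[i:]))
        best = min(best, err)
    return best ** 0.5
```

For instance, `unimodal_distance([1, 3, 2, 4])` returns approximately \(0.707\), the distance to the closest unimodal signal \((1, 2.5, 2.5, 4)\).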


1.5 Appendix 5: Oracle Deconvolution Filter

Consider a blurry and noisy image \(v = \kappa * u_0 + n\), obtained from an image \(u_0\) after a convolution by a kernel \(\kappa \) and the addition of a Gaussian white noise \(n\) with variance \(\sigma ^2\). In this appendix, we show how to compute the oracle kernel \(k_0\) which provides, on average with respect to \(n\), the best linear estimate of \(u_0\) that can be computed from \(v\). This oracle kernel is defined by

$$\begin{aligned} k_0 = \arg \min _k \; {\mathbb {E}}\left( \Vert u_0 - k*( \kappa * u_0 + W) \Vert _2^2 \right) , \end{aligned}$$
(48)

where \(W\) is a Gaussian white noise with variance \(\sigma ^2\). The \(\arg \min \) can be taken over various kernel spaces; here we consider the set of kernels obtained by rotating a linearly interpolated radial profile, that is,

$$\begin{aligned} \forall \varvec{\xi }\in \varOmega , \quad \hat{k}(\varvec{\xi }) = r( \lfloor |\varvec{\xi }| \rfloor ) ( \lceil |\varvec{\xi }| \rceil - |\varvec{\xi }| ) + r (\lceil |\varvec{\xi }| \rceil ) ( |\varvec{\xi }| - \lfloor |\varvec{\xi }| \rfloor ), \end{aligned}$$

where \((r(0),\ldots ,r(d-1)) \in {\mathbb {R}}^{d}\),

$$\begin{aligned} | \varvec{\xi }|^2 = 2(d-1)^2 \left( \frac{\xi _1^2}{M^2} + \frac{\xi _2^2}{N^2} \right) = \frac{2(d-1)^2}{4\pi ^2} \Vert \varvec{\xi }\Vert ^2\; , \end{aligned}$$

and \(\lfloor t \rfloor \) and \(\lceil t \rceil \) denote, respectively, the lower and upper integer part of \(t\in {\mathbb {R}}\) (we also set \(\hat{k}(\varvec{\xi }) = 0\) when \(|\varvec{\xi }| > d-1\)). This interpolation formula naturally involves the disjoint subsets

$$\begin{aligned} \widehat{\varOmega }_l = \left\{ \varvec{\xi }\in \varOmega , l\le |\varvec{\xi }|<l+1\right\} . \end{aligned}$$
(49)

Since \(W\) is a white Gaussian noise, the cost function of (48) can be written as

$$\begin{aligned}&\Vert u_0 - k*\kappa *u_0 \Vert _2^2 + \sigma ^2 MN \Vert k \Vert _2^2 \nonumber \\&\quad = \frac{1}{MN} \sum _{\varvec{\xi }\in \varOmega } | \widehat{u_0}(\varvec{\xi })|^2 | 1 - \hat{k}(\varvec{\xi })\hat{\kappa }(\varvec{\xi }) |^2 + \sigma ^2 MN |\hat{k}(\varvec{\xi })|^2, \end{aligned}$$
(50)

which, when \(\hat{k}\) is radial and when \(\kappa \) is assumed to be symmetric (so that \(\hat{\kappa }\) is real-valued), transforms into

$$\begin{aligned}&\frac{1}{MN} \sum _{l=0}^{d-1} \sum _{\varvec{\xi }\in \widehat{\varOmega }_l} |\widehat{u_0}(\varvec{\xi })|^2\\&\qquad \times \, \Bigg ( 1 - \hat{\kappa }(\varvec{\xi }) r(l)(l+1-|\varvec{\xi }|)- \hat{\kappa }(\varvec{\xi }) r(l+1)(|\varvec{\xi }|-l) \Bigg )^2 \\&\qquad + \sigma ^2 MN \Bigg ( r(l) (l+1-|\varvec{\xi }|) + r(l+1) (|\varvec{\xi }|-l) \Bigg )^2. \end{aligned}$$

This is a quadratic function in \(r\), and its unique minimum is characterized by the vanishing-gradient condition, which can be written as \(Ar = b\), where \(A=((a_{k,l}))_{0\le k,l\le d-1}\) and \(b=(b_l)_{0\le l\le d-1}\) are defined by

$$\begin{aligned} a_{l,l}= & {} \sum _{\varvec{\xi }\in \widehat{\varOmega }_l} (l+1-|\varvec{\xi }|)^2 ( |\hat{\kappa }(\varvec{\xi })|^2 |\widehat{u_0}(\varvec{\xi })|^2 + \sigma ^2 MN) \\&+ \sum _{\varvec{\xi }\in \widehat{\varOmega }_{l-1}}( |\varvec{\xi }| - l +1)^2 ( |\hat{\kappa }(\varvec{\xi })|^2 |\widehat{u_0}(\varvec{\xi })|^2 + \sigma ^2 MN ) \\ a_{l,l+1}= & {} \sum _{\varvec{\xi }\in \widehat{\varOmega }_l} (l+1-|\varvec{\xi }|)(|\varvec{\xi }|-l) ( |\hat{\kappa }(\varvec{\xi })|^2 | \widehat{u_0}(\varvec{\xi })|^2 + \sigma ^2 MN ) \\ a_{l,l-1}= & {} \sum _{\varvec{\xi }\in \widehat{\varOmega }_{l-1}} (|\varvec{\xi }| - l +1)(l-|\varvec{\xi }|) ( |\hat{\kappa }(\varvec{\xi })|^2 | \widehat{u_0}(\varvec{\xi })|^2 + \sigma ^2 MN ) \\ a_{l,m}= & {} 0 \quad \hbox {for} \quad |l-m|>1 \\ b_l= & {} \sum _{\varvec{\xi }\in \widehat{\varOmega }_l} (l+1-|\varvec{\xi }|)^2 ( |\hat{\kappa }(\varvec{\xi })| |\widehat{u_0}(\varvec{\xi })|^2) \\&+ \sum _{\varvec{\xi }\in \widehat{\varOmega }_{l-1}}\!\! ( |\varvec{\xi }| - l +1)^2 ( |\hat{\kappa }(\varvec{\xi })| |\widehat{u_0}(\varvec{\xi })|^2 ). \end{aligned}$$

This linear system, associated with the tridiagonal matrix \(A\), can be solved with standard numerical techniques. The solution is the oracle radial profile \(r_0\), from which the DFT of the oracle kernel \(k_0\) can be defined by

$$\begin{aligned} \forall l, \forall \varvec{\xi }\in \widehat{\varOmega }_l, \quad \widehat{k_0}(\varvec{\xi }) = r_0( l ) ( l+1 - |\varvec{\xi }| ) + r_0 (l+1) ( |\varvec{\xi }| - l). \end{aligned}$$
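
For completeness, here is a minimal sketch of such a solver (the standard Thomas algorithm for a tridiagonal system \(Ar=b\); this is a generic illustration, and any banded solver can be used instead). The three coefficient lists all have length \(d\), with `lower[0]` and `upper[-1]` unused.

```python
def solve_tridiagonal(lower, diag, upper, b):
    """Thomas algorithm: solve A r = b with a_{i,i-1}=lower[i], a_{i,i}=diag[i], a_{i,i+1}=upper[i]."""
    n = len(diag)
    c, d = [0.0] * n, [0.0] * n
    c[0], d[0] = upper[0] / diag[0], b[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - lower[i] * c[i - 1]               # pivot of the forward sweep
        c[i] = upper[i] / m if i < n - 1 else 0.0
        d[i] = (b[i] - lower[i] * d[i - 1]) / m
    r = [0.0] * n
    r[-1] = d[-1]
    for i in range(n - 2, -1, -1):                      # back substitution
        r[i] = d[i] - c[i] * r[i + 1]
    return r
```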

Remark

One can also consider the minimization problem (48) on the set of all kernels \(k\). It is easy to deduce from (50) that the corresponding oracle kernel is given in Fourier domain by

$$\begin{aligned} \forall \varvec{\xi }\in \varOmega , \quad \hat{k}(\varvec{\xi }) = \frac{ \hat{\kappa }(\varvec{\xi })^* \; |\widehat{u_0}(\varvec{\xi })|^2}{ |\hat{\kappa }(\varvec{\xi })|^2 |\widehat{u_0}(\varvec{\xi })|^2 + \sigma ^2 MN }. \end{aligned}$$

One can notice that, making the assumption \(|\hat{u}_0(\varvec{\xi })|^2 = c\Vert \varvec{\xi }\Vert ^{-2}\) instead of \(|\hat{u}_0(\varvec{\xi })|^2 = c|\varvec{\xi }|^{-2}\) (see the discussion at the end of Appendix 1), and setting \(\lambda = \sigma ^2 MN/c\), the corresponding filter is exactly the one that optimizes the criterion (26).
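
As an illustration of this unconstrained formula, here is a minimal sketch (our own; it assumes the blur kernel \(\kappa \) is sampled on the same \(M\times N\) grid as the image, so that periodic convolution is a pointwise product of DFTs, and that \(\sigma >0\)):

```python
import numpy as np

def oracle_filter_dft(u0, kappa, sigma):
    """DFT of the unconstrained oracle kernel k0 given by the Remark above."""
    M, N = u0.shape
    U0 = np.fft.fft2(u0)                  # hat(u0)
    K = np.fft.fft2(kappa)                # hat(kappa)
    P = np.abs(U0) ** 2                   # |hat(u0)|^2
    return np.conj(K) * P / (np.abs(K) ** 2 * P + sigma ** 2 * M * N)
```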


Cite this article

Leclaire, A., Moisan, L. No-Reference Image Quality Assessment and Blind Deblurring with Sharpness Metrics Exploiting Fourier Phase Information. J Math Imaging Vis 52, 145–172 (2015). https://doi.org/10.1007/s10851-015-0560-5
