Article

Truncated Fractional-Order Total Variation for Image Denoising under Cauchy Noise

Jianguang Zhu, Juan Wei, Haijun Lv and Binbin Hao
1 College of Mathematics and Systems Science, Shandong University of Science and Technology, Qingdao 266590, China
2 College of Science, China University of Petroleum, Qingdao 266580, China
* Authors to whom correspondence should be addressed.
Axioms 2022, 11(3), 101; https://doi.org/10.3390/axioms11030101
Submission received: 24 January 2022 / Revised: 13 February 2022 / Accepted: 16 February 2022 / Published: 25 February 2022

Abstract: In recent years, the fractional-order derivative has achieved great success in removing Gaussian noise, impulsive noise, multiplicative noise, and so on, but few works have been devoted to removing Cauchy noise. In this paper, we propose a novel nonconvex variational model for removing Cauchy noise based on the truncated fractional-order total variation. The new model can effectively reduce the staircase effect and preserve small details and textures while removing Cauchy noise. To solve the nonconvex truncated fractional-order total variation regularization model, we propose an efficient alternating minimization method under the framework of the alternating direction method of multipliers. Experimental results illustrate the effectiveness of the proposed model compared to some previous models.

1. Introduction

Image denoising is one of the most fundamental problems in image processing; it aims to recover a clean image from a degraded, noisy observation. In this paper, we focus on the problem of Cauchy noise removal in images. This noise is commonly found in wireless communication systems, synthetic aperture radar (SAR) images, and biomedical images [1,2,3,4]. Let f be the observed noisy image. The image observation model with Cauchy noise is mathematically expressed as
f = u + τ ,
where τ denotes the Cauchy noise. Because of its heavy-tailed distribution, Cauchy noise requires a different removal strategy than Gaussian noise. A random variable x follows the Cauchy distribution if it has the following probability density function (PDF):
$$p(x) = \frac{1}{\pi}\,\frac{\gamma}{\gamma^{2} + (x-\delta)^{2}},$$
where γ > 0 is the scale parameter and δ is a location parameter corresponding to the median of the distribution, which may be assumed to be 0.
In recent years, the problem of Cauchy noise removal has drawn significant attention. Achim et al. [5] applied the α-stable PDF to propose a new statistical model for removing Cauchy noise in the complex wavelet domain. In [6], Wan et al. proposed a novel segmentation technique for color images corrupted by Cauchy noise. Based on total variation regularization, Sciacchitano et al. [7] first proposed a variational model for restoring degraded images corrupted by Cauchy noise. The model in [7] can be expressed as follows:
$$\min_{u}\ \frac{\lambda}{2}\left(\left\langle \log\!\left(\gamma^{2}+(u-f)^{2}\right),\,\mathbf{1}\right\rangle + \mu\,\|u-f_{0}\|_{F}^{2}\right) + \|\nabla u\|_{1}, \qquad (1)$$
where λ > 0 is the regularization parameter, μ > 0 is a penalty parameter, $\langle\cdot,\cdot\rangle$ denotes the Frobenius inner product, $\mathbf{1}$ is an n-by-n matrix of ones, and f₀ is the pre-denoising result obtained by applying the median filter to the noisy image f. In [7], it is proved that the objective function in (1) is strictly convex when $8\mu\gamma^{2} \ge 1$, and hence it has a unique solution. However, this model relies greatly on the median filter used in the last term. In order to overcome the shortcomings of Model (1), Mei et al. [8] proposed the following nonconvex variational model:
$$\min_{u}\ \frac{\lambda}{2}\left\langle \log\!\left(\gamma^{2}+(u-f)^{2}\right),\,\mathbf{1}\right\rangle + \|\nabla u\|_{1}. \qquad (2)$$
Due to the nonconvexity and nonsmoothness of Model (2), they proposed a specific alternating direction method of multipliers with convergence guarantees. As is known, although the total variation regularization model can preserve sharp edges, it cannot recover fine details and textures and often produces the staircase effect.
In order to overcome the limitations of total variation regularization, one choice is to use high-order total variation as the regularization term, for example, second-order total variation [9,10,11], total generalized variation [12,13,14,15], hybrid high-order total variation [16,17,18], and so on. On the other hand, some fractional-order total variation regularization models [19,20,21] have been proposed for additive and multiplicative noise removal. In these works, fractional-order total variation models alleviate the staircase effect and preserve sharp edges when the order of the derivative is chosen properly. Moreover, fractional-order total variation models can also improve texture preservation because of their “non-local” behavior.
To further improve the performance of the fractional-order variation model in image restoration, Chan and Liang [22,23] recently proposed a truncated fractional-order total variation model for Gaussian noise removal. Their numerical experiments showed that the truncated fractional-order total variation model performs better at eliminating the staircase effect and preserving textures than both the total variation regularization model and the fractional-order total variation model. In this paper, we extend the truncated fractional-order total variation regularization to Cauchy noise removal and propose the following model:
$$\min_{u}\ \frac{\lambda}{2}\left(\left\langle \log\!\left(\gamma^{2}+(u-f)^{2}\right),\,\mathbf{1}\right\rangle + \mu\,\|u-u_{0}\|_{F}^{2}\right) + \|\nabla^{\alpha,t} u\|_{1}, \qquad (3)$$
where $\|\nabla^{\alpha,t}u\|_{1}$ is the regularization term introduced in Section 2, and u₀ denotes the image obtained by applying the median filter to the noisy image f. To our knowledge, the proposed model is the first one that uses the truncated fractional-order total variation regularizer for Cauchy noise removal. Although this seems to be a simple generalization, its optimization is more challenging than Gaussian denoising due to the nonconvexity of the data fidelity term. The main contributions of our work are as follows: (1) We propose a novel minimization model for Cauchy noise removal by adopting the truncated fractional-order total variation regularization. (2) The proposed model can suppress staircase artifacts better and keep more details. (3) Compared with some previous Cauchy noise models, the restored images obtained by the proposed model not only have higher peak signal-to-noise ratio (PSNR) and structural similarity index measurement (SSIM) values but also better visual quality.
The rest of this paper is organized as follows. In Section 2, we give the definition of the truncated fractional-order total variation. In Section 3, under the framework of the alternating direction method of multipliers, we develop an efficient alternating minimization algorithm to solve the proposed nonconvex Cauchy noise reduction model. In Section 4, numerical comparisons with existing methods are carried out to confirm the effectiveness of our method. Finally, concluding remarks are given in Section 5.

2. Preliminary

In this section, we briefly review the concept of truncated fractional-order total variation.

Truncated Fractional-Order Total Variation

For an image matrix $u \in \mathbb{R}^{n\times n}$, its fractional-order derivative can be expressed as $\nabla^{\alpha}u = (\partial_{x}^{\alpha}u, \partial_{y}^{\alpha}u)$. The discrete fractional-order gradient at pixel $(i,j)$ is defined as $\left((\partial_{x}^{\alpha}u)_{i,j}, (\partial_{y}^{\alpha}u)_{i,j}\right)$, where
$$(\partial_{x}^{\alpha}u)_{i,j} = \sum_{k=0}^{K-1}\omega_{k}^{\alpha}\,u_{i-k,j}, \qquad (\partial_{y}^{\alpha}u)_{i,j} = \sum_{k=0}^{K-1}\omega_{k}^{\alpha}\,u_{i,j-k},$$
$$\omega_{k}^{\alpha} = (-1)^{k}\,\frac{\Gamma(\alpha+1)}{\Gamma(k+1)\,\Gamma(\alpha-k+1)} = \begin{cases} 1, & k=0,\\[4pt] (-1)^{k}\,\dfrac{\alpha(\alpha-1)\cdots(\alpha-k+1)}{k!}, & k=1,2,\ldots, \end{cases}$$
and K is the number of adjacent pixels used to calculate the fractional-order derivative at each pixel. According to the definition of the coefficients $\omega_{k}^{\alpha}$, the following recurrence formula can be obtained:
$$\omega_{0}^{\alpha} = 1; \qquad \omega_{k}^{\alpha} = \omega_{k-1}^{\alpha}\left(1-\frac{\alpha+1}{k}\right), \quad k = 1,2,\ldots.$$
We use the same truncation strategy as in [22,23] to obtain the new coefficients $\omega_{k}^{\alpha,t}$:
$$\omega_{k}^{\alpha,t} = \begin{cases} \omega_{k}^{\alpha}, & k = 0,1,\ldots,K-2,\\[4pt] \displaystyle\sum_{j=K-1}^{\infty}\omega_{j}^{\alpha}, & k = K-1,\\[4pt] 0, & k = K, K+1,\ldots, \end{cases}$$
and then the truncated fractional-order gradient operator can be defined as $\nabla^{\alpha,t}u = (\partial_{x}^{\alpha,t}u, \partial_{y}^{\alpha,t}u)$, where the discrete truncated fractional-order gradients $\partial_{x}^{\alpha,t}u$ and $\partial_{y}^{\alpha,t}u$ at pixel $(i,j)$ are defined by
$$(\partial_{x}^{\alpha,t}u)_{i,j} = \sum_{k=0}^{K-1}\omega_{k}^{\alpha,t}\,u_{i-k,j}, \qquad (\partial_{y}^{\alpha,t}u)_{i,j} = \sum_{k=0}^{K-1}\omega_{k}^{\alpha,t}\,u_{i,j-k}.$$
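As a concrete reference, the coefficients can be generated directly from the recurrence and the truncation rule above. The following NumPy sketch is illustrative only (our experiments were run in Matlab); the function names and the finite tail length used to approximate the infinite sum are our own choices.

```python
import numpy as np

def frac_coeffs(alpha, m):
    """First m Gruenwald-Letnikov coefficients w_k^alpha via the recurrence
    w_0 = 1, w_k = w_{k-1} * (1 - (alpha + 1) / k)."""
    w = np.empty(m)
    w[0] = 1.0
    for k in range(1, m):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    return w

def truncated_coeffs(alpha, K, tail=500):
    """Truncated coefficients w_k^{alpha,t}: keep w_0,...,w_{K-2}, fold the
    (approximated) tail sum over j >= K-1 into the last slot, zero beyond."""
    w = frac_coeffs(alpha, max(K, tail))
    wt = np.zeros(K)
    wt[:K - 1] = w[:K - 1]
    wt[K - 1] = w[K - 1:].sum()   # finite approximation of the infinite tail sum
    return wt

# Example with the settings used later in the paper: K = 3, alpha = 1.3
print(truncated_coeffs(1.3, 3))
```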
Similarly, we define the divergence operator $\mathrm{div}^{\alpha,t}$ by
$$(\mathrm{div}^{\alpha,t}p)_{i,j} = \sum_{k=0}^{K-1}\omega_{k}^{\alpha,t}\,p^{1}_{i+k,j} + \sum_{k=0}^{K-1}\omega_{k}^{\alpha,t}\,p^{2}_{i,j+k}.$$
This divergence satisfies the duality relation
$$\left\langle \nabla^{\alpha,t}u,\, p\right\rangle = \left\langle u,\, \mathrm{div}^{\alpha,t}p\right\rangle \quad \text{for } p = (p^{1},p^{2}) \in \mathbb{R}^{n^{2}}\times\mathbb{R}^{n^{2}}.$$
Then, the discrete version of the truncated fractional-order total variation can be defined as follows:
$$\|\nabla^{\alpha,t}u\|_{1} = \sum_{1\le i,j\le n}\left|(\nabla^{\alpha,t}u)_{i,j}\right| = \sum_{1\le i,j\le n}\sqrt{(\partial_{x}^{\alpha,t}u)_{i,j}^{2} + (\partial_{y}^{\alpha,t}u)_{i,j}^{2}}.$$
When α = 1, $\|\nabla^{\alpha,t}u\|_{1}$ and $\|\nabla^{\alpha}u\|_{1}$ are equivalent.
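For illustration, the truncated gradient, the matching divergence, and the regularizer value can be sketched in NumPy as below. This is a minimal sketch under our own assumptions (periodic boundary conditions, as used later for the FFT solver; the function names are ours, and the coefficient vector wt comes from the truncated_coeffs sketch above).

```python
import numpy as np

def frac_grad(u, wt):
    """Truncated fractional-order gradient with periodic boundaries:
    (d_x u)_{i,j} = sum_k wt[k] u[i-k, j],  (d_y u)_{i,j} = sum_k wt[k] u[i, j-k]."""
    gx = np.zeros_like(u, dtype=float)
    gy = np.zeros_like(u, dtype=float)
    for k, w in enumerate(wt):
        gx += w * np.roll(u, k, axis=0)   # periodic shift: row i picks up u[i-k, j]
        gy += w * np.roll(u, k, axis=1)
    return gx, gy

def frac_div(p1, p2, wt):
    """Operator satisfying <grad^{a,t} u, p> = <u, div^{a,t} p> for periodic shifts:
    (div p)_{i,j} = sum_k wt[k] (p1[i+k, j] + p2[i, j+k])."""
    d = np.zeros_like(p1, dtype=float)
    for k, w in enumerate(wt):
        d += w * (np.roll(p1, -k, axis=0) + np.roll(p2, -k, axis=1))
    return d

def tftv(u, wt):
    """Discrete truncated fractional-order total variation ||grad^{a,t} u||_1."""
    gx, gy = frac_grad(u, wt)
    return np.sqrt(gx ** 2 + gy ** 2).sum()
```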

3. Proposed Method

In this section, we present an efficient alternating minimization algorithm to solve the proposed Model (3). By introducing two auxiliary variables v , z , Model (3) can be transformed into the following constrained problem:
$$\min_{u,v,z}\ \frac{\lambda}{2}\left(\left\langle \log\!\left(\gamma^{2}+(v-f)^{2}\right),\,\mathbf{1}\right\rangle + \mu\,\|v-u_{0}\|_{F}^{2}\right) + \|z\|_{1} \quad \text{s.t. } v = u,\ z = \nabla^{\alpha,t}u. \qquad (4)$$
To deal with the above constrained optimization problem, we give the corresponding augmented Lagrangian function of (4) as follows:
$$\begin{aligned} \mathcal{L}(u,v,z,\lambda_{1},\lambda_{2}) = {}& \frac{\lambda}{2}\left(\left\langle \log\!\left(\gamma^{2}+(v-f)^{2}\right),\,\mathbf{1}\right\rangle + \mu\,\|v-u_{0}\|_{F}^{2}\right) + \|z\|_{1} \\ &+ \frac{\beta_{1}}{2}\|v-u\|_{2}^{2} - \lambda_{1}^{T}(v-u) + \frac{\beta_{2}}{2}\|z-\nabla^{\alpha,t}u\|_{2}^{2} - \lambda_{2}^{T}(z-\nabla^{\alpha,t}u), \end{aligned} \qquad (5)$$
where β₁, β₂ > 0 are penalty parameters and λ₁, λ₂ are the Lagrangian multipliers. Solving the constrained problem (4) is equivalent to finding a saddle point of the augmented Lagrangian $\mathcal{L}(u,v,z,\lambda_{1},\lambda_{2})$. According to the alternating direction method of multipliers, problem (5) can be solved by the following iteration scheme:
$$\begin{cases} u^{k+1} = \arg\min_{u}\ \mathcal{L}(u, v^{k}, z^{k}, \lambda_{1}^{k}, \lambda_{2}^{k}), & (6a)\\[2pt] v^{k+1} = \arg\min_{v}\ \mathcal{L}(u^{k+1}, v, z^{k}, \lambda_{1}^{k}, \lambda_{2}^{k}), & (6b)\\[2pt] z^{k+1} = \arg\min_{z}\ \mathcal{L}(u^{k+1}, v^{k+1}, z, \lambda_{1}^{k}, \lambda_{2}^{k}), & (6c)\\[2pt] \lambda_{1}^{k+1} = \lambda_{1}^{k} - \beta_{1}(v^{k+1} - u^{k+1}), & (6d)\\[2pt] \lambda_{2}^{k+1} = \lambda_{2}^{k} - \beta_{2}(z^{k+1} - \nabla^{\alpha,t}u^{k+1}). & (6e) \end{cases}$$
In the following, we will provide the details to solve (6a)–(6c).
The u-subproblem (6a) can be rewritten as
$$u^{k+1} = \arg\min_{u}\ \frac{\beta_{1}}{2}\left\|v^{k} - u - \frac{\lambda_{1}^{k}}{\beta_{1}}\right\|_{2}^{2} + \frac{\beta_{2}}{2}\left\|z^{k} - \nabla^{\alpha,t}u - \frac{\lambda_{2}^{k}}{\beta_{2}}\right\|_{2}^{2}. \qquad (7)$$
The optimality condition of (7) can be given by the following equation:
$$\beta_{1}\left(u - v^{k} + \frac{\lambda_{1}^{k}}{\beta_{1}}\right) + \beta_{2}\,(\nabla^{\alpha,t})^{T}\!\left(\nabla^{\alpha,t}u - z^{k} + \frac{\lambda_{2}^{k}}{\beta_{2}}\right) = 0.$$
With some simple computation, the above equation can be rewritten as the following equation:
$$\left(\beta_{1}I + \beta_{2}\,(\nabla^{\alpha,t})^{T}\nabla^{\alpha,t}\right)u = \beta_{1}v^{k} - \lambda_{1}^{k} + (\nabla^{\alpha,t})^{T}\!\left(\beta_{2}z^{k} - \lambda_{2}^{k}\right).$$
Under the periodic boundary condition on the image u, the matrix $(\nabla^{\alpha,t})^{T}\nabla^{\alpha,t}$ is block circulant with circulant blocks. Therefore, the above equation can be solved efficiently by using the fast Fourier transform $\mathcal{F}$ and its inverse $\mathcal{F}^{-1}$:
$$u^{k+1} = \mathcal{F}^{-1}\!\left(\frac{\mathcal{F}\!\left(\beta_{1}v^{k} - \lambda_{1}^{k} + (\nabla^{\alpha,t})^{T}(\beta_{2}z^{k} - \lambda_{2}^{k})\right)}{\mathcal{F}\!\left(\beta_{1}I + \beta_{2}\,(\nabla^{\alpha,t})^{T}\nabla^{\alpha,t}\right)}\right). \qquad (8)$$
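A NumPy sketch of the FFT-based u-update (8) is given below for illustration. Splitting z and λ₂ into their x/y components, the helper names, and the diagonalization via the two convolution kernels are our own implementation choices, not code from the paper.

```python
import numpy as np

def grad_eigs(wt, shape):
    """2-D FFT eigenvalues of the periodic convolutions implementing d_x and d_y."""
    kx = np.zeros(shape); ky = np.zeros(shape)
    for k, w in enumerate(wt):
        kx[k % shape[0], 0] = w       # shift along rows    -> d_x
        ky[0, k % shape[1]] = w       # shift along columns -> d_y
    return np.fft.fft2(kx), np.fft.fft2(ky)

def update_u(v, zx, zy, lam1, lam2x, lam2y, wt, beta1, beta2):
    """Solve (beta1 I + beta2 D^T D) u = beta1 v - lam1 + D^T(beta2 z - lam2) by FFT."""
    Dx, Dy = grad_eigs(wt, v.shape)
    denom = beta1 + beta2 * (np.abs(Dx) ** 2 + np.abs(Dy) ** 2)
    rhs_hat = (np.fft.fft2(beta1 * v - lam1)
               + np.conj(Dx) * np.fft.fft2(beta2 * zx - lam2x)   # D^T = conj in Fourier
               + np.conj(Dy) * np.fft.fft2(beta2 * zy - lam2y))
    return np.real(np.fft.ifft2(rhs_hat / denom))
```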
The v-subproblem (6b) can be written as:
$$v^{k+1} = \arg\min_{v}\ \frac{\lambda}{2}\left(\left\langle \log\!\left(\gamma^{2}+(v-f)^{2}\right),\,\mathbf{1}\right\rangle + \mu\,\|v-u_{0}\|_{F}^{2}\right) + \frac{\beta_{1}}{2}\left\|v - u^{k+1} - \frac{\lambda_{1}^{k}}{\beta_{1}}\right\|_{2}^{2}. \qquad (9)$$
Let
$$G(v) = \frac{\lambda}{2}\left(\left\langle \log\!\left(\gamma^{2}+(v-f)^{2}\right),\,\mathbf{1}\right\rangle + \mu\,\|v-u_{0}\|_{F}^{2}\right) + \frac{\beta_{1}}{2}\left\|v - u^{k+1} - \frac{\lambda_{1}^{k}}{\beta_{1}}\right\|_{2}^{2};$$
then we can obtain the following gradient and Hessian of G(v):
$$\nabla G(v) = \lambda\,\frac{v-f}{\gamma^{2}+(v-f)^{2}} + \lambda\mu\,(v-u_{0}) + \beta_{1}\!\left(v - u^{k+1} - \frac{\lambda_{1}^{k}}{\beta_{1}}\right), \qquad
\nabla^{2}G(v) = \lambda\,\frac{\gamma^{2}-(v-f)^{2}}{\left(\gamma^{2}+(v-f)^{2}\right)^{2}} + (\beta_{1} + \lambda\mu)\,I.$$
Then, the solution of the v-subproblem (6b) can be obtained by the following Newton iteration:
$$v^{k+1,l+1} = v^{k+1,l} - \frac{\nabla G(v^{k+1,l})}{\nabla^{2}G(v^{k+1,l})}, \qquad (10)$$
where $v^{k+1,l+1}$ denotes the result of the (l+1)-th Newton iteration within the (k+1)-th outer iteration.
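A pointwise Newton sketch of the v-update (9)–(10) in NumPy is shown below for reference; the fixed small number of inner Newton steps and the argument names are our own choices, not the paper's settings.

```python
import numpy as np

def update_v(u_new, v_init, f, u0, lam1, lam, mu, gamma, beta1, n_newton=5):
    """A few elementwise Newton steps on G(v), using the gradient and Hessian above."""
    v = v_init.astype(float).copy()
    target = u_new + lam1 / beta1
    for _ in range(n_newton):
        r = v - f
        grad = (lam * r / (gamma ** 2 + r ** 2)
                + lam * mu * (v - u0)
                + beta1 * (v - target))
        hess = (lam * (gamma ** 2 - r ** 2) / (gamma ** 2 + r ** 2) ** 2
                + beta1 + lam * mu)
        v = v - grad / hess          # gradient and Hessian act pointwise here
    return v
```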
The z-subproblem (6c) can be written as
$$z^{k+1} = \arg\min_{z}\ \|z\|_{1} + \frac{\beta_{2}}{2}\left\|z - \nabla^{\alpha,t}u^{k+1} - \frac{\lambda_{2}^{k}}{\beta_{2}}\right\|_{2}^{2}. \qquad (11)$$
Its closed-form solution is given by the following shrinkage formula:
$$z_{i,j}^{k+1} = \max\left\{\left|\left(\nabla^{\alpha,t}u^{k+1} + \frac{\lambda_{2}^{k}}{\beta_{2}}\right)_{i,j}\right| - \frac{1}{\beta_{2}},\ 0\right\}\cdot \frac{\left(\nabla^{\alpha,t}u^{k+1} + \frac{\lambda_{2}^{k}}{\beta_{2}}\right)_{i,j}}{\left|\left(\nabla^{\alpha,t}u^{k+1} + \frac{\lambda_{2}^{k}}{\beta_{2}}\right)_{i,j}\right|}. \qquad (12)$$
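The z-update (12) is an isotropic soft-thresholding applied to $\nabla^{\alpha,t}u^{k+1} + \lambda_{2}^{k}/\beta_{2}$. A minimal NumPy sketch follows; the componentwise representation of z and the small constant guarding division by zero are our own choices.

```python
import numpy as np

def update_z(gx, gy, lam2x, lam2y, beta2):
    """Isotropic soft-thresholding of (grad^{a,t} u^{k+1} + lam2/beta2), threshold 1/beta2."""
    tx = gx + lam2x / beta2
    ty = gy + lam2y / beta2
    mag = np.sqrt(tx ** 2 + ty ** 2)
    scale = np.maximum(mag - 1.0 / beta2, 0.0) / np.maximum(mag, 1e-12)  # treat 0/0 as 0
    return scale * tx, scale * ty
```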
In summary, the proposed method for solving Model (3) is described as follows.
Proposed method for solving Model (3)
Input: noisy image f, parameters λ, μ, α, γ, K, β₁, β₂.
Initialize: u⁰, v⁰, z⁰, λ₁⁰, λ₂⁰, k = 0, maximum iteration number N, ε = 10⁻⁵.
While k < N and ‖u^{k+1} − u^k‖_F / ‖u^k‖_F > ε
    1. Update u^{k+1} by (8).
    2. Update v^{k+1} by (10).
    3. Update z^{k+1} by (12).
    4. Update λ₁^{k+1}, λ₂^{k+1} by (6d) and (6e).
    5. k = k + 1.
End
Output: restored image u.
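To show how the updates fit together, the following sketch assembles the whole loop from the helper functions sketched in the previous snippets (truncated_coeffs, frac_grad, update_u, update_v, update_z are the hypothetical names used there); the default β₁, β₂ values are illustrative placeholders, not the paper's settings.

```python
import numpy as np

def tftv_cauchy_denoise(f, u0, lam, mu, alpha, gamma, K=3,
                        beta1=1.0, beta2=1.0, max_iter=200, eps=1e-5):
    """ADMM-style loop for Model (3); f is the noisy image, u0 its median-filtered version."""
    wt = truncated_coeffs(alpha, K)
    u, v = f.astype(float).copy(), f.astype(float).copy()
    zx, zy = frac_grad(u, wt)
    lam1 = np.zeros_like(u); lam2x = np.zeros_like(u); lam2y = np.zeros_like(u)
    for _ in range(max_iter):
        u_old = u
        u = update_u(v, zx, zy, lam1, lam2x, lam2y, wt, beta1, beta2)   # step 1, Eq. (8)
        v = update_v(u, v, f, u0, lam1, lam, mu, gamma, beta1)          # step 2, Eq. (10)
        gx, gy = frac_grad(u, wt)
        zx, zy = update_z(gx, gy, lam2x, lam2y, beta2)                  # step 3, Eq. (12)
        lam1 -= beta1 * (v - u)                                         # (6d)
        lam2x -= beta2 * (zx - gx)                                      # (6e)
        lam2y -= beta2 * (zy - gy)
        if np.linalg.norm(u - u_old, 'fro') / np.linalg.norm(u_old, 'fro') < eps:
            break
    return u
```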

4. Numerical Experiments

In this section, we present several numerical results to demonstrate the effectiveness of the proposed method. We compare our method with two other well-known methods: the median filter [24] and Mei's method [8]. All the test images, of size 256 × 256, are shown in Figure 1. All the numerical experiments are implemented under Windows 10 in Matlab R2018a, running on a desktop equipped with a 2.60 GHz Intel(R) Core(TM) i5-3230M CPU and 4 GB of memory. In our experiments, PSNR and SSIM [25] are used to evaluate the denoising performance. They are defined as follows:
$$\mathrm{PSNR} = 10\log_{10}\frac{255^{2}\,MN}{\|x-y\|_{2}^{2}},$$
$$\mathrm{SSIM} = \frac{(2\mu_{x}\mu_{y} + C_{1})(2\sigma_{xy} + C_{2})}{(\mu_{x}^{2}+\mu_{y}^{2}+C_{1})(\sigma_{x}^{2}+\sigma_{y}^{2}+C_{2})},$$
where x and y denote the original and restored images, MN is the number of pixels, μ_x and μ_y are the mean values of x and y, σ_x and σ_y are the standard deviations of x and y, σ_{xy} is the covariance of x and y, and C₁ and C₂ are constants.
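For reference, a small NumPy sketch of these metrics is given below, assuming 8-bit grayscale images. The function here evaluates the single-window (global) form of the SSIM formula above, whereas practical SSIM values are usually computed with the locally windowed form of [25]; the constants C₁, C₂ follow the usual defaults from [25].

```python
import numpy as np

def psnr(x, y):
    """PSNR between the original image x and the restored image y (8-bit range)."""
    mse = np.mean((x.astype(float) - y.astype(float)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)

def ssim_global(x, y, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    """Single-window evaluation of the SSIM formula above."""
    x = x.astype(float); y = y.astype(float)
    mx, my = x.mean(), y.mean()
    sx2, sy2 = x.var(), y.var()
    sxy = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * sxy + c2)) / ((mx ** 2 + my ** 2 + c1) * (sx2 + sy2 + c2))
```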
In this paper, the degraded image contaminated by Cauchy noise can be obtained by the following equation:
$$f = u + \tau = u + \eta\,\frac{\xi_{1}}{\xi_{2}},$$
where η denotes the noise level, and ξ₁ and ξ₂ follow a Gaussian distribution with mean 0 and variance 1. Empirically, we set η = γ/2 for good experimental performance. To facilitate comparison among the different methods, the stopping criterion used in all the methods is set to
$$\frac{\|u^{k+1}-u^{k}\|_{F}}{\|u^{k}\|_{F}} < 10^{-5}.$$
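A sketch of this noise generation in NumPy (the seed argument is our own addition for reproducibility):

```python
import numpy as np

def add_cauchy_noise(u, eta, seed=None):
    """f = u + eta * xi1 / xi2 with xi1, xi2 ~ N(0, 1); the ratio of two independent
    standard normals is standard Cauchy, so the noise has Cauchy scale eta."""
    rng = np.random.default_rng(seed)
    xi1 = rng.standard_normal(u.shape)
    xi2 = rng.standard_normal(u.shape)
    return u + eta * xi1 / xi2
```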

4.1. Parameter Discussion

Throughout all the experiments, we set K = 3, the number of adjacent pixels used to approximate the fractional-order derivative, the same as in [22,23]. In this subsection, we mainly focus on the selection of the parameters λ, μ, and α. The images “Lin”, “Lena”, and “Starfish” with noise level η = 0.02 are used to study the choice of these parameters.
First, we study the selection of the parameters λ and μ in Model (3). To find out how the parameter λ affects the performance of the proposed method, we sample it evenly on the interval [0.1, 2] with step length 0.1 and test the recovery quality under each value, with the remaining parameters fixed. We use the same strategy to select the parameters μ and α. The curves of PSNR and SSIM with respect to λ and μ are shown in Figure 2. It is easy to see that the proposed method achieves the best results with λ near 0.8 for all the test images, so we fix λ = 0.8 in the following experiments. From Figure 2, we can also observe that the PSNR and SSIM values are stable when μ > 20, whereas the computation time increases as μ increases; therefore, we set μ = 20 in the following experiments. Finally, we discuss the selection of the fractional-order parameter α. We carry out experiments to compute the PSNR and SSIM values with respect to α. From Figure 2, we can see that the proposed Cauchy noise denoiser achieves the highest PSNR and SSIM values when α = 1.3.
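The parameter sweep described above can be scripted as in the following sketch, which reuses the hypothetical helpers tftv_cauchy_denoise, add_cauchy_noise, and psnr from the earlier snippets; producing u₀ with scipy.ndimage.median_filter is our own choice for illustration.

```python
import numpy as np
from scipy.ndimage import median_filter

def sweep_lambda(clean, gamma, eta=0.02, mu=20.0, alpha=1.3):
    """Evaluate PSNR for lambda in [0.1, 2] with step 0.1, other parameters fixed."""
    noisy = add_cauchy_noise(clean, eta, seed=0)
    u0 = median_filter(noisy, size=3)          # pre-denoised image used in Model (3)
    results = {}
    for lam in np.arange(0.1, 2.0 + 1e-9, 0.1):
        restored = tftv_cauchy_denoise(noisy, u0, lam, mu, alpha, gamma)
        results[round(lam, 1)] = psnr(clean, restored)
    return results
```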

4.2. Image Denoising

From the above discussion of parameter selection, we can see that the fractional-order parameter α has an important effect on the quality of the restored images. To further illustrate that the chosen value of α is reasonable and effective, we fix the other parameters and perform numerical experiments with different values of α to examine the denoising effect. We consider the images “Monarch”, “Parrot”, and “Pallon” with Cauchy noise at η = 0.02, and “Starfish” and “Monarch” with Cauchy noise at η = 0.04. We apply the proposed method with different α to remove the Cauchy noise. Figure 3 shows the denoising results obtained by the proposed method with different fractional orders α. We can clearly see that the quality of the restored image is highest when α = 1.3. For both noise levels, as α increases, some noise remains in the texture areas of the restored images. In Table 1, we report the PSNR and SSIM values for six different images obtained by the proposed method with different values of α and different noise levels. As can be seen, the proposed model achieves the highest PSNR and SSIM values when α = 1.3.
To illustrate the performance of the proposed method for Cauchy noise removal, we compare it with the median filter [24] and Mei's method [8]. In the first experiment, we add Cauchy noise with η = 0.02 to the test images. In Figure 4 and Figure 5, we present the restored images obtained by the different methods at η = 0.02 and η = 0.04, respectively. From Figure 4 and Figure 5, it is easy to see that the median filter can effectively remove Cauchy noise, but it causes the staircase effect and oversmooths the edges. Mei's method provides a better balance between preserving edges and alleviating the staircase effect; however, its restored images show residual noise. In comparison, the proposed model performs better at eliminating noise while preserving edges and details. To quantify the comparison, Table 2 and Table 3 list the PSNR and SSIM values of the restored images obtained by the different methods at η = 0.02 and η = 0.04, respectively. Compared with the other two methods, the proposed method obtains the highest PSNR and SSIM values, which is consistent with the visual comparison.

4.3. Convergence Analysis

In this subsection, we plot the curves of the PSNR and SSIM values versus the iteration number to verify the convergence of the proposed method. For this experiment, we use the images “Lin”, “Lena”, and “Starfish” degraded by Cauchy noise with η = 0.02 and η = 0.04, respectively. The PSNR and SSIM curves of the proposed method are presented in Figure 6. It is easy to see that the curves become flat when the number of iterations exceeds 60.

5. Conclusions

Based on the truncated fractional-order total variation regularization, we propose a new variational model for Cauchy noise removal. Under the framework of the alternating direction method of multipliers, we propose an alternating minimization method to solve the proposed model. Extensive experiments demonstrate the superiority of the proposed model and method over some existing methods.

Author Contributions

J.Z. and H.L. proposed the algorithm and designed the experiments; J.W. and H.L. performed the experiments; H.L. and J.W. wrote the original draft; J.Z. and B.H. reviewed and edited the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Major Program of the National Natural Science Foundation of China under Grant 11991024, by the National Natural Science Foundation of China under Grant 61976126, by the Qingdao Postdoctoral Science Foundation, China (2016114), and by a Project of the Shandong Province Higher Educational Science and Technology Program, China (J17KA166).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data supporting this study are contained within the article.

Acknowledgments

The authors are grateful to the anonymous referees for their valuable constructive comments and suggestions, which improved the quality of this paper in the present form. The authors would also like to thank Jinjin Mei for sending us the code in [8].

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Idan, M.; Speyer, J.L. Cauchy estimation for linear scalar systems. IEEE Trans. Autom. Control 2010, 55, 1329–1342. [Google Scholar] [CrossRef]
  2. Kuruoglu, E.E.; Fitzgerald, W.J.; Rayner, P.J. Near optimal detection of signals in impulsive noise modeled with a symmetric alpha-stable distribution. IEEE Commun. Lett. 1998, 2, 282–284. [Google Scholar] [CrossRef]
  3. Peng, Y.; Chen, J.; Xu, X.; Pu, F. Sar images statistical modeling and classification based on the mixture of alpha-stable distributions. Remote Sens. 2013, 5, 2145–2163. [Google Scholar] [CrossRef] [Green Version]
  4. Nolan, J.P. Numerical calculation of stable densities and distribution functions. Commun. Stat. Stoch. Model. 1997, 13, 759–774. [Google Scholar]
  5. Achim, A.; Kuruoglu, E.E. Image denoising using bivariate α-stable distributions in the complex wavelet domain. IEEE Signal Process. Lett. 2005, 12, 17–20. [Google Scholar] [CrossRef]
  6. Wan, T.; Canagarajah, N.; Achim, A. Segmentation of noisy colour images using Cauchy distribution in the complex wavelet domain. IET Image Process. 2011, 5, 159–170. [Google Scholar] [CrossRef]
  7. Sciacchitano, F.; Dong, Y.; Zeng, T. Variational approach for restoring blurred images with cauchy noise. SIAM J. Imaging Sci. 2015, 8, 1894–1922. [Google Scholar] [CrossRef] [Green Version]
  8. Mei, J.; Dong, Y.; Huang, T.; Yin, W. Cauchy noise removal by nonconvex admm with convergence guarantees. J. Sci. Comput. 2018, 74, 743–766. [Google Scholar] [CrossRef] [Green Version]
  9. Liu, P.F.; Xiao, L. Efficient multiplicative noise removal method using isotropic second order total variation. Comput. Math. Appl. 2015, 70, 2029–2048. [Google Scholar] [CrossRef]
  10. Chen, Y.; Huang, T.Z.; Zhao, X.L.; Deng, L.J. Speckle noise removal in ultrasound images by first- and second-order total variation. Numer. Algorithms 2018, 78, 513–533. [Google Scholar]
  11. Tarmizi, A.; Raveendran, P. Hybrid non-convex second-order total variation with applications to non-blind image deblurring. Signal Image Video Process. 2020, 14, 115–123. [Google Scholar]
  12. Liu, X.W. Augmented Lagrangian method for total generalized variation based Poissonian image restoration. Comput. Math. Appl. 2016, 71, 1694–1705. [Google Scholar] [CrossRef]
  13. Gao, Y.; Liu, F.; Xiaoping, Y. Total generalized variation restoration with non-quadratic fidelity. Multidimens. Syst. Signal Process. 2018, 29, 1459–1484. [Google Scholar] [CrossRef]
  14. Shao, W.Z.; Wang, F.; Huang, L.L. Adapting total generalized variation for blind image restoration. Multidimens. Syst. Signal Process. 2019, 30, 857–883. [Google Scholar] [CrossRef]
  15. Jiang, L.; Yin, H. Total generalized variation and wavelet transform for impulsive image restoration. Signal Image Video Process. 2021. [Google Scholar] [CrossRef]
  16. Liang, J.W.; Zhang, X.Q. Retinex by Higher Order Total Variation L1 Decomposition. J. Math. Imaging Vis. 2015, 52, 345–355. [Google Scholar] [CrossRef]
  17. Liu, P. Hybrid higher-order total variation model for multiplicative noise removal. IET Image Process. 2020, 14, 862–873. [Google Scholar] [CrossRef]
  18. Sun, Y.; Lei, L.; Guan, D.; Li, X.; Kuang, G. SAR Image Speckle Reduction Based on Nonconvex Hybrid Total Variation Model. IEEE Trans. Geosci. Remote Sens. 2021, 59, 1231–1249. [Google Scholar] [CrossRef]
  19. Chowdhury, M.R.; Qin, J.; Lou, Y. Non-blind and Blind Deconvolution Under Poisson Noise Using Fractional-Order Total Variation. J. Math. Imaging Vis. 2020, 62, 1238–1255. [Google Scholar] [CrossRef]
  20. Chen, G.; Li, G.; Liu, Y.; Zhang, X.; Zhang, L. SAR Image Despeckling Based on Combination of Fractional-Order Total Variation and Nonlocal Low Rank Regularization. IEEE Trans. Geosci. Remote Sens. 2020, 58, 2056–2070. [Google Scholar] [CrossRef]
  21. Yang, X.; Zhang, J.; Liu, Y.; Zheng, X.; Liu, K. Super-resolution image reconstruction using fractional-order total variation and adaptive regularization parameters. Vis. Comput. 2019, 35, 1755–1768. [Google Scholar] [CrossRef]
  22. Chan, R.H.; Liang, H. Truncated fractional-order total variation model for image restoration. J. Oper. Res. Soc. China 2019, 7, 561–578. [Google Scholar] [CrossRef]
  23. Liang, H.; Zhang, J. Dual algorithm for truncated fractional variation based image denoising. Int. J. Comput. Math. 2019, 97, 1–13. [Google Scholar] [CrossRef]
  24. Frieden, B. A new restoring algorithm for the preferential enhancement of edge gradients. J. Opt. Soc. Am. 1976, 66, 116–123. [Google Scholar] [CrossRef]
  25. Wang, Z.; Bovik, A.; Sheikh, H.; Simoncelli, E. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [PubMed] [Green Version]
Figure 1. Test images.
Figure 2. (a–f) The values of PSNR and SSIM with respect to the parameters λ, μ, and α.
Figure 3. The restored image obtained by our proposed method with different α .
Figure 4. Denoising results of three methods at η = 0.02 .
Figure 5. Denoising results of three methods at η = 0.04 .
Figure 6. Top row: curves of PSNR and SSIM versus the iteration number at η = 0.02. Bottom row: curves of PSNR and SSIM versus the iteration number at η = 0.04.
Table 1. The PSNR and SSIM values obtained by our proposed method with different fractional-order parameters α and different noise levels.
η     Order     Metric      Lin      Lena     Starfish  Monarch  Parrots  Pallon
0.02  α = 1     PSNR (dB)   32.6540  30.8777  29.5223   30.0423  33.6052  31.2158
                SSIM        0.8979   0.8904   0.8851    0.9148   0.8949   0.8952
      α = 1.3   PSNR (dB)   32.8101  31.0835  29.7979   30.1435  33.6994  31.4119
                SSIM        0.9039   0.8977   0.8920    0.9216   0.8997   0.9011
      α = 1.5   PSNR (dB)   32.7393  31.0464  29.7474   30.0382  33.6463  31.3735
                SSIM        0.9053   0.8998   0.8919    0.9232   0.9011   0.9026
      α = 1.7   PSNR (dB)   32.5558  30.8884  29.5741   29.8435  33.5011  31.2102
                SSIM        0.9039   0.8976   0.8889    0.9225   0.9005   0.9015
      α = 1.9   PSNR (dB)   31.2377  29.8783  28.7559   28.9616  31.9868  30.1515
                SSIM        0.8902   0.8833   0.8758    0.9125   0.8874   0.8876
      α = 2     PSNR (dB)   30.0010  28.8982  27.9622   28.1182  30.5678  29.1301
                SSIM        0.8784   0.8706   0.8657    0.9019   0.8745   0.8752
0.04  α = 1     PSNR (dB)   30.5904  28.6729  27.1310   27.6771  29.0138  31.6858
                SSIM        0.8654   0.8428   0.8238    0.8756   0.8622   0.8697
      α = 1.3   PSNR (dB)   30.6629  28.7953  27.3793   27.8112  29.1444  31.7235
                SSIM        0.8667   0.8482   0.8326    0.8800   0.8644   0.8697
      α = 1.5   PSNR (dB)   30.0983  28.4428  27.1130   27.2980  28.7271  31.0574
                SSIM        0.8521   0.8376   0.8249    0.8709   0.8508   0.8548
      α = 1.7   PSNR (dB)   27.4326  26.4269  25.5197   25.5712  26.6072  27.9338
                SSIM        0.8064   0.7998   0.7939    0.8381   0.8088   0.8097
      α = 1.9   PSNR (dB)   23.4457  22.9880  22.5588   22.5492  23.0828  23.6297
                SSIM        0.7358   0.7383   0.7411    0.7840   0.7439   0.7368
      α = 2     PSNR (dB)   21.6235  21.3118  21.0121   20.9940  21.3740  21.7438
                SSIM        0.7036   0.7095   0.7147    0.7293   0.7566   0.7143
Table 2. PSNR and SSIM values for different algorithms ( η = 0.02 ).
Image      Median [24]          Mei's Method [8]     Our Method
           PSNR (dB)   SSIM     PSNR (dB)   SSIM     PSNR (dB)   SSIM
Lin        31.2381     0.8563   31.4169     0.8679   32.8101     0.9039
Lena       29.6954     0.8640   30.3413     0.8826   31.0835     0.8977
Starfish   26.2701     0.8630   29.2600     0.8788   29.7979     0.8920
Monarch    29.4393     0.8925   29.7134     0.9043   30.1435     0.9216
Parrot     29.7421     0.8597   30.6362     0.8766   31.4119     0.9011
Pallon     31.9994     0.8539   32.8719     0.8785   33.6994     0.8997
Boats      29.0469     0.8376   30.1942     0.8773   30.5385     0.8818
Elaine     32.1007     0.8852   32.2804     0.8988   33.2166     0.9078
Table 3. PSNR and SSIM values for different algorithms ( η = 0.04 ).
Image      Median [24]          Mei's Method [8]     Our Method
           PSNR (dB)   SSIM     PSNR (dB)   SSIM     PSNR (dB)   SSIM
Lin        28.4978     0.7064   29.3442     0.8326   30.6629     0.8667
Lena       27.3856     0.7540   28.0523     0.8312   28.7953     0.8482
Starfish   26.3759     0.7809   26.4340     0.7926   27.3793     0.8326
Monarch    26.2771     0.7977   27.2914     0.8636   27.8112     0.8800
Parrot     27.3956     0.7293   28.4972     0.8444   29.1444     0.8644
Pallon     28.9130     0.7064   31.5087     0.8668   31.7235     0.8697
Boats      26.8995     0.7392   27.5676     0.8008   28.2589     0.8244
Elaine     28.8692     0.7772   29.7711     0.8490   30.8427     0.8679
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
