
Multistage supervised contrastive learning for hybrid-degraded image restoration

  • Original Paper
  • Published in Signal, Image and Video Processing

Abstract

Natural image degradation is often unavoidable due to factors such as noise, blur, compression artifacts, haze, and raindrops. Most previous works have made significant progress, but they consider only a single type of degradation and overlook hybrid degradation, which is fairly common in natural images. To tackle this challenge, we propose a multistage network architecture that gradually learns and restores the hybrid degradation model of the image. The model comprises three stages, and each pair of adjacent stages is connected to exchange information between the earlier and later stages. In addition, we employ a double-pooling channel attention block that combines maximum and average pooling; it infers more intricate channel attention and enhances the network's representation capability. Finally, we introduce contrastive learning during model training. Our method outperforms comparable methods in quantitative scores and visual quality and restores more detailed textures, improving overall image quality.
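To make the two components named above more concrete, below is a minimal PyTorch sketch of a double-pooling channel attention block and a simple contrastive-style restoration loss. The class and function names, the reduction ratio, and the choice of an L1 ratio loss are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DoublePoolChannelAttention(nn.Module):
    """Channel attention that fuses global max- and average-pooled descriptors
    through a shared bottleneck MLP before re-weighting the input channels."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Shared 1x1-conv MLP applied to both pooled descriptors.
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1, bias=False),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg_desc = self.mlp(F.adaptive_avg_pool2d(x, 1))  # B x C x 1 x 1
        max_desc = self.mlp(F.adaptive_max_pool2d(x, 1))  # B x C x 1 x 1
        weights = torch.sigmoid(avg_desc + max_desc)      # per-channel weights
        return x * weights


def contrastive_restoration_loss(restored_feat: torch.Tensor,
                                 clean_feat: torch.Tensor,
                                 degraded_feat: torch.Tensor,
                                 eps: float = 1e-7) -> torch.Tensor:
    """One common contrastive regularizer for restoration (an assumption here,
    not necessarily the paper's exact loss): pull the restored image's features
    toward the clean target (positive) and away from the degraded input (negative)."""
    l1 = nn.L1Loss()
    return l1(restored_feat, clean_feat) / (l1(restored_feat, degraded_feat) + eps)


if __name__ == "__main__":
    # Quick shape check with random tensors.
    att = DoublePoolChannelAttention(channels=64)
    y = att(torch.randn(2, 64, 128, 128))
    print(y.shape)  # torch.Size([2, 64, 128, 128])
```

In practice the feature tensors for the loss would come from a fixed extractor such as a pretrained VGG network, and the attention block would sit inside each stage's encoder-decoder; both placements are assumptions for illustration.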



Acknowledgements

This work was supported by the General Project of the Liaoning Provincial Department of Education, China (No. LJKZ0986), the Postdoctoral Science Foundation (No. 2019M651123), and the Science and Technology Innovation Fund (Youth Science and Technology Star) of Dalian, China (No. 2018RQ65), received by Dr. Bo Fu; by the National Natural Science Foundation of China (NSFC, Grant No. 61976109), the Liaoning Provincial Key Laboratory Special Fund, and the Dalian Key Laboratory Special Fund, received by Dr. Yonggong Ren; and by the University of Economics Ho Chi Minh City, Vietnam, received by Dr. Dang Ngoc Hoang Thanh.

Author information

Corresponding authors

Correspondence to Yonggong Ren or Dang N. H. Thanh.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Fu, B., Dong, Y., Fu, S. et al. Multistage supervised contrastive learning for hybrid-degraded image restoration. SIViP 17, 573–581 (2023). https://doi.org/10.1007/s11760-022-02262-8

