
Underwater Image Enhancement Using Stacked Generative Adversarial Networks

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 11166)

Abstract

This paper addresses the problem of joint haze detection and color correction from a single underwater image. We present a framework based on stacked conditional Generative Adversarial Networks (GANs) that learns the mapping between underwater images and in-air images in an end-to-end fashion. The proposed architecture can be divided into two components, i.e., a haze detection sub-network and a color correction sub-network, each with its own generator and discriminator. Specifically, an underwater image is fed into the first generator to produce a haze detection mask. The underwater image, along with the predicted mask, then passes through the second generator, which corrects the color of the underwater image. Experimental results show the advantages of the proposed method over several state-of-the-art methods on publicly available synthetic and real underwater datasets.
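The two-stage data flow described in the abstract can be sketched as follows. The functions `haze_detection_generator` and `color_correction_generator` are hypothetical stand-ins for the paper's learned conditional GAN generators; only the stage ordering and the input/output shapes (an RGB image in, a one-channel mask from stage one, image plus mask into stage two) follow the paper.

```python
import numpy as np

def haze_detection_generator(img):
    # Stage 1 stand-in: predict a single-channel haze mask in [0, 1].
    # (Stub only: uses red-channel attenuation as a fake mask; the paper
    # learns this mapping with an adversarially trained generator.)
    return 1.0 - img[..., 0:1]                      # (H, W, 1)

def color_correction_generator(img, mask):
    # Stage 2 stand-in: consume the RGB image concatenated with the
    # predicted mask (4 channels) and output a color-corrected RGB image.
    x = np.concatenate([img, mask], axis=-1)        # (H, W, 4)
    rgb = x[..., :3]
    # Stub correction: rescale each channel toward the global mean.
    gain = rgb.mean() / (rgb.mean(axis=(0, 1), keepdims=True) + 1e-8)
    return np.clip(rgb * gain, 0.0, 1.0)

def stacked_enhance(img):
    mask = haze_detection_generator(img)            # haze detection
    return color_correction_generator(img, mask)    # color correction

img = np.random.rand(64, 64, 3).astype(np.float32)
out = stacked_enhance(img)
print(out.shape)  # (64, 64, 3)
```

In the paper both stages are trained adversarially, each generator paired with its own discriminator; the sketch only illustrates how the mask produced by the first stage conditions the second.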

This work was supported by National Natural Science Foundation of China (NSFC) under Grant 61702078, 61772106, and by the Fundamental Research Funds for the Central Universities.


Notes

  1. Note that the Gray World algorithm aims at correcting color, while Non-local Image Dehazing can serve as a post-processing step to deblur the corrected image. Combining the two achieves relatively high performance.
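The Gray World step mentioned in this note can be illustrated with a minimal sketch: a plain NumPy implementation of the classical Gray World assumption (each channel is rescaled so that the scene average becomes gray), not code from the paper.

```python
import numpy as np

def gray_world(img):
    # Gray World assumption: the average color of a natural scene is gray,
    # so scale each channel so its mean matches the global mean intensity.
    channel_means = img.mean(axis=(0, 1))           # per-channel means
    gray = channel_means.mean()                     # target gray level
    gains = gray / (channel_means + 1e-8)
    return np.clip(img * gains, 0.0, 1.0)

# A uniform blue-green cast (typical of underwater images) is neutralized:
cast = np.zeros((4, 4, 3))
cast[...] = [0.2, 0.5, 0.6]
balanced = gray_world(cast)                         # all channels equal
```

On the uniform cast above, every output channel ends up at the same gray level, which is exactly the color-balancing behavior the note pairs with a dehazing step.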

  2. There is no in-air ground truth for comparison, so we only show visual comparisons and give brief explanations.

References

  1. Iqbal, K., Salam, R.A., Osman, M., Talib, A.Z., et al.: Underwater image enhancement using an integrated colour model. IAENG Int. J. Comput. Sci. 32(2), 239–244 (2007)

  2. Provenzi, E., Fierro, M., Rizzi, A.: A spatially variant white-patch and gray-world method for color image enhancement driven by local contrast. IEEE Trans. Pattern Anal. Mach. Intell. 30(10), 1757–1770 (2008)

  3. Ancuti, C., Ancuti, C.O., Haber, T., Bekaert, P.: Enhancing underwater images and videos by fusion. In: 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 81–88. IEEE (2012)

  4. Li, C.-Y., Guo, J.-C., Cong, R.-M., Pang, Y.-W., Wang, B.: Underwater image enhancement by dehazing with minimum information loss and histogram distribution prior. IEEE Trans. Image Process. 25(12), 5664–5677 (2016)

  5. Schechner, Y.Y., Averbuch, Y.: Regularized image recovery in scattering media. IEEE Trans. Pattern Anal. Mach. Intell. 29(9) (2007)

  6. Chiang, J.Y., Chen, Y.-C.: Underwater image enhancement by wavelength compensation and dehazing. IEEE Trans. Image Process. 21(4), 1756–1769 (2012)

  7. Zhang, S., Zhang, J., Fang, S., Cao, Y.: Underwater stereo image enhancement using a new physical model. In: 2014 IEEE International Conference on Image Processing (ICIP), pp. 5422–5426. IEEE (2014)

  8. Berman, D., Treibitz, T., Avidan, S.: Non-local image dehazing. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016)

  9. Zhang, H., Patel, V.M.: Densely connected pyramid dehazing network. In: 2018 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2018)

  10. Shin, Y.S., Cho, Y., Pandey, G., et al.: Estimation of ambient light and transmission map with common convolutional architecture. In: OCEANS, pp. 1–7. IEEE (2016)

  11. Wang, Y., Zhang, J., Cao, Y., et al.: A deep CNN method for underwater image enhancement. In: IEEE International Conference on Image Processing (ICIP), pp. 1382–1386. IEEE (2017)

  12. Li, J., Skinner, K.A., Eustice, R.M., Johnson-Roberson, M.: WaterGAN: unsupervised generative network to enable real-time color correction of monocular underwater images. IEEE Robot. Autom. Lett. 3(1), 387–394 (2018)

  13. Goodfellow, I.J., et al.: Generative adversarial nets. In: Advances in Neural Information Processing Systems (NIPS) (2014)

  14. Li, C., Guo, J., Guo, C.: Emerging from water: underwater image color correction based on weakly supervised color transfer. IEEE Signal Process. Lett. PP(99), 1 (2017)

  15. Chen, X., Yu, J., Kong, S., et al.: Towards quality advancement of underwater machine vision with generative adversarial networks (2018)

  16. Ronneberger, O., Fischer, P., Brox, T.: U-Net: convolutional networks for biomedical image segmentation. In: Navab, N., Hornegger, J., Wells, W.M., Frangi, A.F. (eds.) MICCAI 2015. LNCS, vol. 9351, pp. 234–241. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-24574-4_28

  17. Silberman, N., Hoiem, D., Kohli, P., Fergus, R.: Indoor segmentation and support inference from RGBD images. In: Fitzgibbon, A., Lazebnik, S., Perona, P., Sato, Y., Schmid, C. (eds.) ECCV 2012. LNCS, vol. 7576, pp. 746–760. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-33715-4_54

  18. Chen, Q., Xu, J., Koltun, V.: Fast image processing with fully-convolutional networks. In: ICCV (2017)

  19. Zhu, J.-Y., Park, T., Isola, P., Efros, A.A.: Unpaired image-to-image translation using cycle-consistent adversarial networks. In: ICCV (2017)

  20. Isola, P., Zhu, J.-Y., Zhou, T., Efros, A.A.: Image-to-image translation with conditional adversarial networks. In: CVPR (2017). arXiv:1611.07004

Author information

Correspondence to Xinchen Ye.


Copyright information

© 2018 Springer Nature Switzerland AG

About this paper


Cite this paper

Ye, X., Xu, H., Ji, X., Xu, R. (2018). Underwater Image Enhancement Using Stacked Generative Adversarial Networks. In: Hong, R., Cheng, W.H., Yamasaki, T., Wang, M., Ngo, C.W. (eds) Advances in Multimedia Information Processing – PCM 2018. Lecture Notes in Computer Science, vol 11166. Springer, Cham. https://doi.org/10.1007/978-3-030-00764-5_47


  • DOI: https://doi.org/10.1007/978-3-030-00764-5_47


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-00763-8

  • Online ISBN: 978-3-030-00764-5

  • eBook Packages: Computer Science (R0)
