Research article

HPCDNet: Hybrid position coding and dual-frequency domain transform network for low-light image enhancement

  • Received: 25 September 2023 | Revised: 20 December 2023 | Accepted: 25 December 2023 | Published: 05 January 2024
  • Abstract: Low-light image enhancement (LLIE) improves illumination to recover natural, normal-light images from images captured under poor lighting. However, existing LLIE methods do not make effective use of positional and frequency-domain image information. To address this limitation, we propose an end-to-end low-light image enhancement network called HPCDNet. HPCDNet integrates a hybrid positional coding technique into the self-attention mechanism by appending hybrid positional codes to the query and key, which better retains spatial positional information in the image. The hybrid positional coding can adaptively emphasize important local structures to improve the modeling of spatial dependencies within low-light images. Meanwhile, frequency-domain information lost under low light is recovered via discrete wavelet and cosine transforms. The two resulting types of frequency-domain features are weighted and merged by a dual-attention module. More effective use of frequency-domain information strengthens the network's ability to reconstruct details, improving the visual quality of enhanced low-light images. Experiments demonstrate that our approach improves the visibility, contrast and color of low-light images while preserving details and textures better than previous techniques.
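
    To make the two mechanisms described in the abstract concrete, the following minimal PyTorch sketches illustrate them under stated assumptions. They are not the authors' released implementation, and every module and function name used here (sinusoidal_encoding, HybridPosSelfAttention, haar_dwt2d, dct2d, DualFrequencyFusion) is a hypothetical placeholder. The first sketch appends a hybrid positional code, the sum of a fixed sinusoidal code and a learned code, to the inputs of the query and key projections only, so the attention scores retain spatial position while the values stay content-only.

        import torch
        import torch.nn as nn

        def sinusoidal_encoding(length: int, dim: int) -> torch.Tensor:
            """Fixed absolute (sinusoidal) positional codes of shape (length, dim); dim assumed even."""
            pos = torch.arange(length, dtype=torch.float32).unsqueeze(1)
            two_i = 2.0 * (torch.arange(dim) // 2).float()        # 0, 0, 2, 2, 4, 4, ...
            angle = pos / torch.pow(10000.0, two_i / dim)
            enc = torch.empty(length, dim)
            enc[:, 0::2] = torch.sin(angle[:, 0::2])
            enc[:, 1::2] = torch.cos(angle[:, 1::2])
            return enc

        class HybridPosSelfAttention(nn.Module):
            """Self-attention whose query and key carry a hybrid of a fixed
            absolute code and a learned code, preserving spatial position."""

            def __init__(self, dim: int, num_tokens: int):
                super().__init__()
                self.scale = dim ** -0.5
                self.to_q = nn.Linear(dim, dim, bias=False)
                self.to_k = nn.Linear(dim, dim, bias=False)
                self.to_v = nn.Linear(dim, dim, bias=False)
                self.register_buffer("abs_pos", sinusoidal_encoding(num_tokens, dim))
                # Learned component lets the network adaptively emphasize local structure.
                self.learned_pos = nn.Parameter(torch.zeros(num_tokens, dim))

            def forward(self, x: torch.Tensor) -> torch.Tensor:
                # x: (batch, num_tokens, dim); tokens are flattened image patches.
                pos = self.abs_pos + self.learned_pos              # hybrid positional code
                q = self.to_q(x + pos)                             # code appended to query
                k = self.to_k(x + pos)                             # ... and to key
                v = self.to_v(x)                                   # values stay content-only
                attn = torch.softmax((q @ k.transpose(-2, -1)) * self.scale, dim=-1)
                return attn @ v

        layer = HybridPosSelfAttention(dim=64, num_tokens=16 * 16)
        print(layer(torch.randn(2, 16 * 16, 64)).shape)            # torch.Size([2, 256, 64])

    The second sketch gives one plausible reading of the dual-frequency branch: a single-level Haar discrete wavelet transform and a per-channel 2-D discrete cosine transform extract two frequency-domain feature types, which are then weighted and merged by a simple squeeze-and-excitation style channel gate standing in for the paper's dual-attention module.

        import math
        import torch
        import torch.nn as nn

        def haar_dwt2d(x: torch.Tensor) -> torch.Tensor:
            """Single-level 2-D Haar DWT of a (B, C, H, W) map with even H and W;
            returns the LL, LH, HL, HH subbands stacked along the channel axis."""
            a, b = x[..., 0::2, 0::2], x[..., 0::2, 1::2]
            c, d = x[..., 1::2, 0::2], x[..., 1::2, 1::2]
            ll, lh = (a + b + c + d) / 2, (a - b + c - d) / 2
            hl, hh = (a + b - c - d) / 2, (a - b - c + d) / 2
            return torch.cat([ll, lh, hl, hh], dim=1)

        def dct_matrix(n: int) -> torch.Tensor:
            """Orthonormal DCT-II basis matrix of size (n, n)."""
            x = torch.arange(n, dtype=torch.float32)
            u = x.view(-1, 1)
            mat = math.sqrt(2.0 / n) * torch.cos(math.pi * (2 * x + 1) * u / (2 * n))
            mat[0] = math.sqrt(1.0 / n)
            return mat

        def dct2d(x: torch.Tensor) -> torch.Tensor:
            """Per-channel 2-D DCT of a (B, C, H, W) feature map."""
            dh, dw = dct_matrix(x.shape[-2]), dct_matrix(x.shape[-1])
            return dh @ x @ dw.T

        class DualFrequencyFusion(nn.Module):
            """Weights and merges DWT- and DCT-derived features with a
            squeeze-and-excitation style channel gate (stand-in for dual attention)."""

            def __init__(self, channels: int):
                super().__init__()
                self.dwt_proj = nn.Conv2d(4 * channels, channels, 1)  # fold subbands back
                self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
                self.gate = nn.Sequential(
                    nn.AdaptiveAvgPool2d(1),
                    nn.Conv2d(2 * channels, 2 * channels, 1),
                    nn.Sigmoid(),
                )
                self.out = nn.Conv2d(2 * channels, channels, 1)

            def forward(self, x: torch.Tensor) -> torch.Tensor:
                f_dwt = self.up(self.dwt_proj(haar_dwt2d(x)))          # wavelet branch
                f_dct = dct2d(x)                                       # cosine branch
                both = torch.cat([f_dwt, f_dct], dim=1)
                return self.out(both * self.gate(both))                # attention-weighted merge

        fuse = DualFrequencyFusion(channels=32)
        print(fuse(torch.randn(1, 32, 64, 64)).shape)                  # torch.Size([1, 32, 64, 64])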

    Citation: Mingju Chen, Hongyang Li, Hongming Peng, Xingzhong Xiong, Ning Long. HPCDNet: Hybrid position coding and dual-frequency domain transform network for low-light image enhancement[J]. Mathematical Biosciences and Engineering, 2024, 21(2): 1917-1937. doi: 10.3934/mbe.2024085



  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)