Computer Science ›› 2023, Vol. 50 ›› Issue (6A): 220600182-6. doi: 10.11896/jsjkx.220600182
JIN Yahui1, JIANG Murong1, LI Fuhai1, YANG Lei2, CHEN Junyi2
Abstract: Solar speckle images are characterized by low contrast, highly similar granule structures, and small inter-frame differences, so existing reconstruction networks suffer from insufficient high-frequency features and poorly recovered local details when deblurring a single frame. Combining multi-focus image fusion, a two-stage network with gradient enhancement and an FPN is constructed to achieve high-resolution reconstruction of solar speckle images. First, exploiting the complementary similarity between frames of an image sequence, a block-focus image fusion algorithm is used to restore the high-frequency details lost in a single frame. Second, with a generative adversarial network (GAN) as the framework, a two-stage reconstruction network, DSGEF, is designed: a gradient branch and a structural-feature branch jointly enhance high-frequency details, and an FPN then performs multi-scale feature reconstruction to sharpen granule edges. Finally, a joint loss function combining adversarial, pixel, and perceptual losses is introduced to guide the training of DSGEF toward high-resolution reconstruction of solar speckle images. Experimental results show that, compared with existing deep learning methods, the proposed method clearly improves both peak signal-to-noise ratio (PSNR) and structural similarity (SSIM), meeting the requirements of high-resolution reconstruction of solar observation images.
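The block-focus fusion step described above can be illustrated with a minimal sketch: each image is partitioned into blocks, a focus measure is computed per block for each frame, and the fused image keeps the block from whichever frame is sharper. The abstract does not specify the exact focus measure, so local variance is assumed here purely for illustration; function names are hypothetical.

```python
import numpy as np

def block_variance(block):
    # Local variance as a simple focus/sharpness measure
    # (assumed here; the paper's exact measure may differ).
    return float(np.var(block))

def block_focus_fusion(img_a, img_b, block=2):
    # For each block, keep the version with the higher focus measure,
    # i.e., the block carrying more high-frequency detail.
    h, w = img_a.shape
    fused = np.empty_like(img_a)
    for i in range(0, h, block):
        for j in range(0, w, block):
            a = img_a[i:i + block, j:j + block]
            b = img_b[i:i + block, j:j + block]
            fused[i:i + block, j:j + block] = (
                a if block_variance(a) >= block_variance(b) else b
            )
    return fused
```

In the paper's setting, this per-block selection across similar sequence frames is what supplies the high-frequency detail that single-frame deblurring lacks, before the fused result is passed to the DSGEF network.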