DOI: 10.1145/3563737.3563759

MRI Super-Resolution using Implicit Neural Representation with Frequency Domain Enhancement

Published: 21 November 2022

ABSTRACT

High-resolution (HR) Magnetic Resonance Imaging (MRI) is a popular diagnostic tool that provides detailed structural information and rich textures, benefiting accurate diagnosis and disease detection. However, obtaining HR MRI remains a challenge due to longer scan times and lower peak signal-to-noise ratio (PSNR). Recently, Single Image Super-Resolution (SISR) has attracted interest, showing promise for recovering an HR image from only a low-resolution (LR) image. MR images differ from natural images in several respects: they are derived from the frequency domain and have simpler textures and structural information. However, most previous methods treat MR images the same as natural images, directly applying SR methods designed for natural images, and consequently fail to preserve low-frequency information and to capture high-frequency details. In this paper, we mimic the process by which an MRI scanner produces an image in practice and propose an Implicit Neural Representation based module, which reconstructs high-frequency content effectively while keeping low-frequency content unchanged. Moreover, since the vanilla L1 loss cannot reflect the differences at each frequency, we design a frequency loss that disentangles the frequencies and computes their differences separately. Finally, to further capture high-frequency content, we propose a High-Frequency Pixel Loss, which decouples the HF content from the pixel domain and emphasizes the HF differences between SR and HR images. Extensive experiments show the effectiveness of our proposed method in terms of visual quality and PSNR, producing sharper edges and clearer details compared to previous works.
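The abstract does not give the exact formulations of the two losses, but their intent can be illustrated with a minimal PyTorch sketch: the frequency loss is assumed here to be an L1 difference between the 2D Fourier spectra of the SR and HR images, and the High-Frequency Pixel Loss is assumed to apply a radial high-pass mask in the frequency domain before comparing the filtered images in the pixel domain. The function names, the cutoff parameter, and the masking scheme are illustrative assumptions, not the paper's definitions.

    import torch

    def frequency_loss(sr, hr):
        # Sketch of a frequency-domain loss (assumption): L1 difference between the
        # 2D Fourier spectra of the super-resolved (sr) and ground-truth (hr)
        # images, both shaped (B, C, H, W).
        sr_f = torch.fft.fft2(sr, norm="ortho")
        hr_f = torch.fft.fft2(hr, norm="ortho")
        return (sr_f - hr_f).abs().mean()

    def high_frequency_pixel_loss(sr, hr, cutoff=0.1):
        # Sketch of a high-frequency pixel loss (assumption): suppress low
        # frequencies with a radial mask, transform back to the pixel domain,
        # and compare the remaining high-frequency content with L1.
        _, _, h, w = sr.shape
        fy = torch.fft.fftfreq(h, device=sr.device).view(-1, 1)
        fx = torch.fft.fftfreq(w, device=sr.device).view(1, -1)
        hf_mask = (fy ** 2 + fx ** 2).sqrt() > cutoff  # True only for high frequencies

        def high_pass(x):
            x_f = torch.fft.fft2(x, norm="ortho")
            return torch.fft.ifft2(x_f * hf_mask, norm="ortho").real

        return (high_pass(sr) - high_pass(hr)).abs().mean()

In practice both terms would be weighted against a standard pixel-wise L1 loss during training; the relative weights are not stated in the abstract.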


• Published in

  ICBIP '22: Proceedings of the 7th International Conference on Biomedical Signal and Image Processing
  August 2022, 139 pages
  ISBN: 9781450396691
  DOI: 10.1145/3563737
      Copyright © 2022 ACM

      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

      Publisher

      Association for Computing Machinery

      New York, NY, United States


      Qualifiers

      • research-article
      • Research
      • Refereed limited
