Multi-modal medical image fusion using improved dual-channel PCNN

  • Original Article
  • Published:
Medical & Biological Engineering & Computing

Abstract

This paper proposes a medical image fusion method in the non-subsampled shearlet transform (NSST) domain to combine a gray-scale image with the corresponding pseudo-color image obtained through a different imaging modality. The proposed method applies a novel improved dual-channel pulse-coupled neural network (IDPCNN) model to fuse the high-pass sub-images, whereas the Prewitt operator is combined with maximum regional energy (MRE) to construct the fused low-pass sub-image. First, the gray-scale image and the luminance of the pseudo-color image are decomposed using NSST to obtain the respective sub-images. Second, the low-pass sub-images are fused by the Prewitt operator and MRE-based rule. Third, the proposed IDPCNN is used to obtain the fused high-pass sub-images from the respective high-pass sub-images. Fourth, the luminance of the fused image is obtained by applying inverse NSST to the fused sub-images, and it is combined with the chrominance components of the pseudo-color image to construct the fused image. A total of 28 diverse medical image pairs, 11 existing methods, and nine objective metrics are used in the experiments. Qualitative and quantitative results show that the proposed method is competitive with, and in several cases outperforms, existing medical image fusion approaches. The method is also shown to fuse two gray-scale images effectively.
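To make the pipeline concrete, the sketch below outlines the fusion steps in Python. It is a minimal illustration, not the authors' implementation: nsst_decompose, nsst_reconstruct, and idpcnn_fuse are hypothetical placeholders for the paper's NSST and IDPCNN components, and the low-pass rule shown is only one plausible reading of the "Prewitt operator plus maximum regional energy" strategy named above.

```python
# Illustrative sketch of the described fusion pipeline (not the authors' code).
# Assumptions: nsst_decompose/nsst_reconstruct stand in for an NSST implementation,
# and idpcnn_fuse stands in for the paper's IDPCNN high-pass fusion rule.
import numpy as np
from scipy.ndimage import convolve, uniform_filter

PREWITT_X = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], dtype=float)
PREWITT_Y = PREWITT_X.T


def prewitt_magnitude(img):
    """Gradient magnitude of img computed with the Prewitt operator."""
    gx = convolve(img, PREWITT_X, mode="nearest")
    gy = convolve(img, PREWITT_Y, mode="nearest")
    return np.hypot(gx, gy)


def regional_energy(img, size=3):
    """Sum of squared intensities over a size x size neighbourhood."""
    return uniform_filter(img ** 2, size=size) * (size * size)


def fuse_lowpass(lp_a, lp_b, size=3):
    """Per-pixel selection of the low-pass coefficient with the larger activity.

    The activity combines the regional energy of the sub-image and of its
    Prewitt gradient; this is an assumed reading of the Prewitt + MRE rule.
    """
    act_a = regional_energy(lp_a, size) + regional_energy(prewitt_magnitude(lp_a), size)
    act_b = regional_energy(lp_b, size) + regional_energy(prewitt_magnitude(lp_b), size)
    return np.where(act_a >= act_b, lp_a, lp_b)


def fuse_luminance(gray, luma, nsst_decompose, nsst_reconstruct, idpcnn_fuse):
    """Fuse a gray-scale image with the luminance channel of a pseudo-color image."""
    lp_a, hp_a = nsst_decompose(gray)   # low-pass sub-image + list of high-pass sub-images
    lp_b, hp_b = nsst_decompose(luma)
    fused_lp = fuse_lowpass(lp_a, lp_b)
    fused_hp = [idpcnn_fuse(a, b) for a, b in zip(hp_a, hp_b)]
    return nsst_reconstruct(fused_lp, fused_hp)
```

In the full method, the returned fused luminance would replace the luminance channel of the pseudo-color image (e.g., in a YCbCr-style color space) before converting back to RGB, so the chrominance of the functional modality is preserved.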

Graphical abstract

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Acknowledgements

The authors would like to thank Dr. (Maj.) BPS Dhillon, Dhillon Diagnostic & CT Centre, Patiala, India, for his thorough observation, validation, and appreciation of this work.

Author information

Contributions

All authors made substantial contributions to the concept, design, and revision of the paper. Methodology by Adarsh Sinha, Rahul Agarwal, Vinay Kumar, Nitin Garg, Dhruv Singh Pundir, Harsimran Singh, Ritu Rani, and Chinmaya Panigrahy; software development by Adarsh Sinha, Rahul Agarwal, Vinay Kumar, Nitin Garg, and Dhruv Singh Pundir; and project administration/supervision by Chinmaya Panigrahy.

Corresponding author

Correspondence to Chinmaya Panigrahy.

Ethics declarations

Ethical approval

This article does not contain any studies with human participants or animals performed by any of the authors.

Consent to participate

Not applicable

Consent for publication

Not applicable

Conflict of interest

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Sinha, A., Agarwal, R., Kumar, V. et al. Multi-modal medical image fusion using improved dual-channel PCNN. Med Biol Eng Comput (2024). https://doi.org/10.1007/s11517-024-03089-w

  • Received:

  • Accepted:

  • Published:

  • DOI: https://doi.org/10.1007/s11517-024-03089-w
