
Information Fusion

Volume 64, December 2020, Pages 71-91

Multi-focus image fusion: A Survey of the state of the art

https://doi.org/10.1016/j.inffus.2020.06.013

Highlights

  • A comprehensive review of existing multi-focus image fusion methods is provided.

  • A comparative study for MFIF is conducted and the relevant resources are released.

  • The current challenges and future trends of multi-focus image fusion are discussed.

Abstract

Multi-focus image fusion is an effective technique for extending the depth-of-field of optical lenses by creating an all-in-focus image from a set of partially focused images of the same scene. In the last few years, great progress has been achieved in this field along with the rapid development of image representation theories and approaches such as multi-scale geometric analysis, sparse representation and deep learning. This survey first presents a comprehensive overview of existing multi-focus image fusion methods. To keep up with the latest developments in this field, a new taxonomy is introduced that classifies existing methods into four main categories: transform domain methods, spatial domain methods, methods combining transform domain and spatial domain, and deep learning methods. For each category, representative fusion methods are introduced and summarized. Then, a comparative study of 18 representative fusion methods is conducted on 30 pairs of commonly used multi-focus images using 8 popular objective fusion metrics. All the relevant resources, including source images, objective metrics and fusion results, are released online, aiming to provide a benchmark for future studies of multi-focus image fusion. Finally, several major challenges remaining in current research are discussed and some future prospects are put forward.

Introduction

By generating an all-in-focus image from a set of partially focused images, multi-focus image fusion is an effective way to extend the depth-of-field of optical lenses, which has great significance in fields such as digital photography, optical microscopy and integral imaging. Research on multi-focus image fusion has lasted for over 30 years, during which hundreds of relevant scientific articles have been published. In particular, the last few years have witnessed great progress in this field, with a continuous increase in both the quantity and quality of newly proposed methods. To illustrate this point, Fig. 1 shows the number of related articles published in international journals indexed by the Science Citation Index Expanded (SCIE) in the last 15 years (from 2005 to 2019). The statistics are obtained from the Web of Science (WoS) Core Collection database restricted to the SCIE library. The search topic is "multi-focus image fusion" or "multifocus image fusion" to avoid omissions (the WoS database supports searching with multiple topics at the same time). It can be seen from Fig. 1 that there is an obvious growing trend over the last decade, indicating the increasing dynamism of this research area. A major factor behind this growth is the fast development of signal/image processing and analysis theories such as sparse representation and deep learning. For example, there has been a remarkable upturn since 2017, mainly due to the rise of deep learning based studies. Another notable growth stage starts in 2010, owing to the popularization of pixel-based spatial domain methods and sparse representation based methods. The main target of this paper is to provide a comprehensive survey of multi-focus image fusion.

A number of influential survey works on image fusion have appeared in the literature, and they can generally be grouped into two categories: technique-oriented ones and problem-oriented ones. The former denotes surveys that concentrate on image fusion based on a specific signal/image processing technique, such as multi-scale transform [1], [2], sparse representation [3] and deep learning [4]. The latter denotes surveys that focus on a specific image fusion problem, such as medical image fusion [5], [6], infrared and visible image fusion [7], [8] and remote sensing image fusion [9]. Some surveys concern both factors simultaneously [10], [11]. For example, Li et al. [10] recently presented an all-round overview of image fusion, in which various fusion approaches and different fusion issues are discussed. Although the multi-focus image fusion problem is involved in some of the above surveys [3], [4], [10], a systematic survey devoted specifically to it has not yet been found, according to our latest retrieval on Web of Science for journal articles indexed by SCIE. Furthermore, as mentioned above, a considerable number of deep learning based multi-focus image fusion methods have been proposed since 2017, leading to a new trend in this field. However, most of these methods are not covered by the above surveys (only a very few of them, published in 2017, are included in [4]). Therefore, it is of great significance to provide a thorough review of multi-focus image fusion, which could be helpful both to researchers who intend to quickly grasp the recent advances of this field and to practitioners who are interested in applying multi-focus image fusion algorithms to solve their own problems. The main contributions of this survey are outlined as follows.

  1. A comprehensive overview of existing multi-focus image fusion methods is provided. To keep up with the latest developments in this field, a new taxonomy is presented to classify existing fusion methods into four main categories: transform domain methods, spatial domain methods, methods combining transform domain and spatial domain, and deep learning methods.

  2. A comparative study for multi-focus image fusion is conducted. The performances of 18 representative fusion methods on 30 pairs of source images are evaluated using 8 popular objective metrics. All the relevant resources, including source images, objective metrics and fusion results, have been made available online at https://github.com/yuliu316316/MFIF, which can be used as a benchmark for the future study of multi-focus image fusion.

  3. The challenges in the current research of multi-focus image fusion are discussed and some prospects for future study are put forward.

The remainder of this paper is organized as follows. In Section 2, a detailed overview of existing multi-focus image fusion methods is provided. Section 3 briefly introduces the main applications of multi-focus image fusion. A comparative study for multi-focus image fusion is presented in Section 4. In Section 5, we discuss the main challenges in this field and put forward some future prospects. Finally, Section 6 concludes the paper.

Section snippets

Methods

A typical taxonomy for image fusion methods is to classify them into two categories: transform domain methods and spatial domain methods [12]. However, with the fast development of multi-focus image fusion technology, some methods that simultaneously operate in both the transform domain and the spatial domain have been proposed in the literature. In addition, deep learning (DL)-based study has recently emerged as a hot topic in multi-focus image fusion. Thus, the original taxonomy can no longer cover all the existing methods well, and in this survey a new taxonomy with four main categories, namely transform domain methods, spatial domain methods, methods combining transform domain and spatial domain, and deep learning methods, is adopted.
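To make the distinction between the categories concrete for readers new to the field, the snippet below gives a minimal sketch of the classic transform domain idea: decompose the sources, fuse the coefficients with a simple activity-level rule, and invert the transform. It is only an illustration of the general pipeline, not any specific method reviewed in this survey; the single-level DWT, the db2 wavelet and the max-absolute rule are assumptions made for brevity.

import numpy as np
import pywt  # PyWavelets

def dwt_fuse(img_a, img_b, wavelet="db2"):
    """Fuse two registered grayscale images of equal size in the wavelet domain."""
    ca, (ha, va, da) = pywt.dwt2(img_a.astype(np.float64), wavelet)
    cb, (hb, vb, db) = pywt.dwt2(img_b.astype(np.float64), wavelet)
    # Approximation band: simple averaging (a common low-frequency rule).
    approx = (ca + cb) / 2.0
    # Detail bands: keep the coefficient with the larger magnitude, which
    # tends to originate from the in-focus source image.
    pick = lambda x, y: np.where(np.abs(x) >= np.abs(y), x, y)
    details = (pick(ha, hb), pick(va, vb), pick(da, db))
    fused = pywt.idwt2((approx, details), wavelet)
    return np.clip(fused, 0, 255).astype(np.uint8)

Spatial domain methods, by contrast, compute a focus measure directly on the pixels and copy the sharper pixels, blocks or regions into the fused image, without any forward/inverse transform.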

Applications

As an effective and low-cost way to extend the depth-of-field of optical lenses, the multi-focus image fusion technique has a broad range of applications in areas such as biology, medicine, industry and agriculture. Specifically, it has been successfully applied to improve the imaging quality of optical microscopy, holographic imaging, integral imaging, laser speckle contrast imaging, thermal imaging, etc.

The most popular application scenario of multi-focus image fusion is optical microscopy, where the depth-of-field of the imaging system is very limited and an all-in-focus image of a specimen usually cannot be captured in a single shot.

Experiments

In this section, we provide a comparative study for multi-focus image fusion. The performances of 18 representative fusion methods on 30 commonly used multi-focus image pairs are evaluated using 8 popular objective metrics. All the relevant resources, including source images, objective metrics and fusion results, have been made available online at https://github.com/yuliu316316/MFIF, which can act as a benchmark for the future study of multi-focus image fusion.
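The 8 metrics used in the comparison are not listed in this excerpt. As an illustration of how such objective metrics typically work, the sketch below computes a mutual-information-style score that rewards a fused image for preserving the intensity statistics of both sources; the 256-bin histogram and the plain sum of the two MI terms are assumptions for illustration only, not necessarily one of the metrics used in the study.

import numpy as np

def mutual_information(x, y, bins=256):
    # Joint histogram of the two images, normalized to a probability table.
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

def fusion_mi(src_a, src_b, fused):
    """Higher values mean more information from both sources is preserved."""
    return mutual_information(src_a, fused) + mutual_information(src_b, fused)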

Current challenges

Despite the great progress achieved in recent years, there still remain several major challenges in the field of multi-focus image fusion.

First, the fusion strategy for boundary regions needs further improvement. The boundary regions are the regions between the focused and defocused regions in the source images, and they are usually located in areas where the scene depth changes abruptly. The focus property in the boundary regions is complicated, as some pixels may be focused while others may be defocused.
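As a minimal sketch of why boundary regions are difficult, the code below compares a simple focus measure (local Laplacian energy) between two sources: the decision is clear where one measure dominates, but a band of pixels where the two measures are nearly equal, typically along the focused/defocused boundary, remains ambiguous. The window size and relative margin are arbitrary assumptions, and this is not the boundary-handling strategy of any particular method discussed here.

import numpy as np
from scipy.ndimage import laplace, uniform_filter

def focus_decision(img_a, img_b, win=9, margin=0.05):
    # Local Laplacian energy as a crude per-pixel focus measure.
    fa = uniform_filter(laplace(img_a.astype(np.float64)) ** 2, size=win)
    fb = uniform_filter(laplace(img_b.astype(np.float64)) ** 2, size=win)
    decision = fa > fb  # True: take the pixel from image A
    # Pixels where the two measures are nearly equal usually lie on the
    # focused/defocused boundary and need a dedicated fusion strategy.
    uncertain = np.abs(fa - fb) < margin * (fa + fb + 1e-12)
    return decision, uncertain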

Conclusion

In this survey, we present a comprehensive overview of existing multi-focus image fusion methods. To keep up with the latest developments in this field, a new taxonomy is introduced to group existing multi-focus image fusion methods into four main categories: transform domain methods, spatial domain methods, methods combining transform domain and spatial domain, and deep learning methods. Each category is further divided into several sub-categories for clear classification. We introduce a number of representative fusion methods in each category and summarize their main ideas.

CRediT authorship contribution statement

Yu Liu: Conceptualization, Methodology, Resources, Writing - original draft. Lei Wang: Resources, Software, Validation. Juan Cheng: Visualization, Writing - review & editing. Chang Li: Resources, Writing - review & editing. Xun Chen: Writing - review & editing, Supervision.

Declaration of Competing Interest

The authors declare that there is no potential conflict of interest.

Acknowledgements

This work is supported by the National Natural Science Foundation of China (Grants 61701160, 61922075 and 41901350), the Provincial Natural Science Foundation of Anhui (Grants 1808085QF186 and 2008085QF285) and the Fundamental Research Funds for the Central Universities (Grants JZ2020HGPA0111 and JZ2019HGBZ0151).

References (264)

  • H. Li et al., Multisensor image fusion using the wavelet transform, Graphical Models and Image Processing (1995)
  • I. De et al., A simple and efficient algorithm for multifocus image fusion using morphological wavelets, Signal Processing (2006)
  • Y. Chai et al., Multifocus image fusion scheme based on features of multiscale products and PCNN in lifting stationary wavelet domain, Opt. Commun. (2011)
  • Y. Chibani et al., Redundant versus orthogonal wavelet decomposition for multisensor image fusion, Pattern Recognit. (2003)
  • R. Redondo et al., Multifocus image fusion using the log-Gabor transform and a multisize windows technique, Information Fusion (2009)
  • A. Baradarani et al., Tunable halfband-pair wavelet filter banks and application to multifocus image fusion, Pattern Recognit. (2012)
  • J. Lewis et al., Pixel- and region-based image fusion with complex wavelets, Information Fusion (2007)
  • J. Tian et al., Adaptive multi-focus image fusion using a wavelet-based statistical sharpness measure, Signal Processing (2012)
  • S. Aymaz et al., A novel image decomposition-based hybrid technique with super-resolution method for multi-focus image fusion, Information Fusion (2019)
  • B. Yu et al., Hybrid dual-tree complex wavelet transform and support vector machine for digital multi-focus image fusion, Neurocomputing (2016)
  • Q. Zhang et al., Multifocus image fusion using the nonsubsampled contourlet transform, Signal Processing (2009)
  • Q. Miao et al., A novel algorithm of image fusion using shearlets, Opt. Commun. (2011)
  • G. Easley et al., Sparse directional image representations using the discrete shearlet transform, Appl. Comput. Harmon. Anal. (2008)
  • B. Zhang et al., Multi-focus image fusion algorithm based on compound PCNN in surfacelet domain, Optik (2014)
  • X. Qu et al., Image fusion algorithm based on spatial frequency-motivated pulse coupled neural networks in nonsubsampled contourlet transform domain, Acta Automatica Sinica (2008)
  • W. Kong et al., Fusion technique for multi-focus images based on NSCT-ISCM, Optik (2015)
  • H. Zhao et al., Multi-focus image fusion based on the neighbor distance, Pattern Recognit. (2013)
  • Z. Dong et al., A general memristor-based pulse coupled neural network with variable linking coefficient for multi-focus image fusion, Neurocomputing (2018)
  • D. Bavirisetti et al., Multi-focus image fusion using multi-scale image decomposition and saliency detection, Ain Shams Eng. J. (2018)
  • J. Nunes et al., Image analysis by bidimensional empirical mode decomposition, Image Vis. Comput. (2003)
  • X. Qin et al., Multi-focus image fusion based on window empirical mode decomposition, Infrared Physics & Technology (2017)
  • B. Yang et al., Pixel-level image fusion with simultaneous orthogonal matching pursuit, Information Fusion (2012)
  • Y. Jiang et al., Image fusion with morphological component analysis, Information Fusion (2014)
  • H. Yin et al., A novel sparse-representation-based multi-focus image fusion approach, Neurocomputing (2016)
  • X. Ma et al., Multi-focus image fusion based on joint sparse representation and optimum theory, Signal Process. Image Commun. (2019)
  • M. Nejati et al., Multi-focus image fusion using dictionary-based sparse representation, Information Fusion (2015)
  • Q. Zhang et al., Robust sparse representation based multi-focus image fusion with dictionary construction and local spatial consistency, Pattern Recognit. (2018)
  • Z. Zhou et al., Multi-scale weighted gradient-based fusion for multi-focus images, Information Fusion (2014)
  • J. Sun et al., Poisson image fusion based on Markov random field fusion model, Information Fusion (2013)
  • N. Mitianoudis et al., Pixel-based and region-based image fusion schemes using ICA bases, Information Fusion (2007)
  • Z. Zhang et al., A categorization of multiscale-decomposition-based image fusion schemes with a performance study for a digital camera application, Proc. IEEE (1999)
  • A. Dogra et al., From multi-scale decomposition to non-multi-scale decomposition methods: a comprehensive survey of image fusion techniques and its applications, IEEE Access (2017)
  • T. Stathaki, Image Fusion: Algorithms and Applications (2008)
  • P. Burt et al., Merging images through pattern decomposition, Applications of Digital Image Processing VIII, International Society for Optics and Photonics (1985)
  • P. Burt et al., The Laplacian pyramid as a compact image code, IEEE Transactions on Communications (1983)
  • P. Burt et al., Enhanced image capture through fusion, Proceedings of the IEEE International Conference on Computer Vision (ICCV) (1993)
  • V. Petrovic et al., Gradient-based multiresolution image fusion, IEEE Trans. Image Process. (2004)
  • X. Jin et al., A lightweight scheme for multi-focus image fusion, Multimed. Tools Appl. (2018)
  • L. Kou et al., A multi-focus image fusion method via region mosaic on Laplacian pyramids, PLoS ONE (2018)
  • S.G. Mallat, A theory for multiresolution signal decomposition: the wavelet representation, IEEE Trans. Pattern Anal. Mach. Intell. (1989)