Article

Multi-Focus Image Fusion: Algorithms, Evaluation, and a Library

by Rabia Zafar, Muhammad Shahid Farid * and Muhammad Hassan Khan
Punjab University College of Information Technology, University of the Punjab, Lahore 54000, Pakistan
* Author to whom correspondence should be addressed.
J. Imaging 2020, 6(7), 60; https://doi.org/10.3390/jimaging6070060
Submission received: 9 May 2020 / Revised: 21 June 2020 / Accepted: 24 June 2020 / Published: 2 July 2020

Abstract: Image fusion is a process that integrates similar types of images collected from heterogeneous sources into one image in which the information is more definite and certain. Hence, the resultant image is anticipated to be more explanatory and enlightening for both human and machine perception. Different image combination methods have been presented to consolidate significant data from a collection of images into one image. As a result of its applications and advantages in a variety of fields such as remote sensing, surveillance, and medical imaging, it is important to understand image fusion algorithms and to study them comparatively. This paper presents a review of the present state-of-the-art and well-known image fusion techniques. The performance of each algorithm is assessed qualitatively and quantitatively on two benchmark multi-focus image datasets. We also produce a multi-focus image fusion dataset by collecting the test images widely used in different studies. The quantitative evaluation of the fusion results is performed using a set of image fusion quality assessment metrics, and the performance is also evaluated using different statistical measures. Another contribution of this paper is the proposal of a multi-focus image fusion library; to the best of our knowledge, no such library exists so far. The library provides implementations of numerous state-of-the-art image fusion algorithms and is made publicly available on the project website.

1. Introduction

Cameras usually have limited focusing capabilities. These limitations exist due to the limited depth of field (DOF) of the optical lenses of traditional cameras. Limited DOF means that a camera can focus on a particular area while the rest of the scene remains unfocused [1,2]. Objects that lie at a certain distance, i.e., in the focus of the camera, are captured clearly and sharply, but objects in front of or behind the focal plane of the camera lens remain blurry [3,4,5]. However, many fields, e.g., medical imaging, geographical imaging, remote sensing, and image transmission [6,7,8,9,10,11], need images that are clear and sharp so that their interpretation and analysis for different purposes can be carried out more efficiently and effectively. Image fusion can be used to merge multiple images captured with the same or different modalities to gather additional information. For example, in medical imaging, some tests provide information about the bony structure and others about the tissues of a certain organ, but it is helpful if the doctor has a single image that describes both functional and anatomical information, which can be used for assessment and to plan surgical procedures [12,13]. Similarly, radiologists prefer to use integrated images for the diagnosis and treatment of cancer.
In remote sensing, a remote location is analyzed and examined by satellite, e.g., to estimate or detect damage in an area exposed to an earthquake. The equipment used alone often does not provide sufficiently convincing data. Studies have established that image processing in different fields also needs images with both high spatial and high spectral resolution [14,15,16]. Hence, images captured from different satellites, such as SPOT PAN and LANDSAT, are fused to generate an image with high resolution [16,17,18,19]. When camera types come into consideration, each type provides images with different information; for example, an image captured by an infrared camera offers information that lies in the infrared spectrum, while a digital color camera captures information in the visible spectrum. These two sensors complement each other's information; e.g., in surveillance applications, a better assessment can be made with an image containing both types of information. Hence, images can be merged for effective analysis and a better understanding of the situation [20,21,22]. The fused images not only overcome the focus constraint, they can also be easily interpreted by both human and machine.
Image fusion not only improves interpretation for both machine and human but also reduces the image transmission cost [6,23,24]. This reduction is achieved because, after fusion, there is no need to transmit multiple images of the same scene with different parts in focus; a single all-in-focus image suffices. A general multi-focus image fusion algorithm estimates a focus map for each input image. This focus map categorizes each pixel in the image as focused or defocused, either discretely or continuously. The maps are used to define a fusion rule which is responsible for creating the all-in-focus image. A block diagram of such an algorithm is shown in Figure 1.
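The pipeline of Figure 1 can be summarized in a few lines of code. The following is a minimal sketch, assuming two registered grayscale sources given as floating-point NumPy arrays; the focus measure (local energy of the Laplacian) and the winner-take-all fusion rule are illustrative choices rather than the method of any specific paper reviewed here.

```python
# A minimal sketch of the generic pipeline in Figure 1, assuming two registered
# grayscale source images of equal size. The focus measure and fusion rule are
# illustrative choices, not the method of any specific reviewed paper.
import numpy as np
from scipy import ndimage

def focus_map(image, window=9):
    """Per-pixel focus measure: local energy of the Laplacian."""
    lap = ndimage.laplace(image.astype(np.float64))
    return ndimage.uniform_filter(lap ** 2, size=window)

def fuse_pair(img_a, img_b, window=9):
    """Winner-take-all fusion rule: copy each pixel from the sharper source."""
    mask = focus_map(img_a, window) >= focus_map(img_b, window)  # True where A is at least as sharp
    return np.where(mask, img_a, img_b)
```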
Multi-focus image fusion has received significant research attention lately, resulting in the proposal of numerous techniques, e.g., [25,26,27,28,29,30,31,32]. In this paper, we review the recent literature on multi-focus image fusion and evaluate the performance of these techniques visually and objectively. The performance of image fusion quality assessment metrics is also evaluated by computing different correlations. Moreover, we present a library which offers the implementation of 24 well-known multi-focus image fusion algorithms.
The rest of the paper is organized as follows. In Section 2, we describe the criteria of effectiveness of an image fusion algorithm and group the existing techniques into different categories. Transform domain based image fusion techniques are reviewed in Section 3 and spatial domain based approaches are discussed in Section 4. The objective evaluation and the visual comparison of the results of compared methods are presented in Section 5. Section 6 introduces the proposed multi-focus image fusion library, and Section 7 draws the conclusions of this research.

2. Image Fusion Approaches and Criteria of Effectiveness

There are different algorithms to perform fusion, but a fusion technique that aims to produce effective results should satisfy the following conditions, as recommended in [24,33,34]:
  • The relevant information of the source images should remain preserved.
  • There should not be any inconsistencies in the fused image.
  • The noise and irrelevant information should be removed or minimized as much as possible.
Numerous image fusion methods have been proposed in recent years. Based on their representation, they are categorized into two main groups: spatial domain and transform domain [20,35,36]. The spatial domain methods operate directly on the original pixel values. Spatial domain fusion is achieved by using localized spatial features such as pixels or regions of the images [32]. The fusion procedure in the spatial domain combines the pixels or features that depict the focused parts of the source images. Focus measures such as the energy of the Laplacian or the spatial frequency are used to decide which parts of an image are in focus, as illustrated in the sketch below. We further categorize the spatial domain methods, by image representation and processing level, into three classes: pixel level fusion [28,31,36], feature level fusion [37,38], and decision level fusion [39].
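The following fragment is a hedged illustration of such a spatial-domain decision, assuming two registered grayscale sources: the images are divided into fixed-size tiles, the spatial frequency is used as the focus measure, and each tile of the fused image is copied from the sharper source. The tile size and the absence of any consistency verification are simplifying assumptions.

```python
# Block-level spatial-domain fusion sketch: spatial frequency as the focus
# measure, tile-wise winner-take-all selection. Assumes equally sized,
# registered grayscale inputs; tile size is arbitrary.
import numpy as np

def spatial_frequency(block):
    """Spatial frequency: combined strength of row and column gradients."""
    rf = np.sqrt(np.mean(np.diff(block, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(block, axis=0) ** 2))  # column frequency
    return np.sqrt(rf ** 2 + cf ** 2)

def block_fusion(img_a, img_b, tile=16):
    fused = img_b.astype(np.float64).copy()
    h, w = img_a.shape
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            a = img_a[y:y + tile, x:x + tile]
            b = img_b[y:y + tile, x:x + tile]
            if spatial_frequency(a) >= spatial_frequency(b):
                fused[y:y + tile, x:x + tile] = a  # copy the sharper tile
    return fused
```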
Transform domain based image fusion involves the conversion of the source images into transformation coefficients [18,32]. These coefficients are then fused, and the inverse transform is applied to the fused coefficients to obtain the resultant image. The wavelet transform [29,40,41] and the discrete cosine transform [42] are examples of transforms used for fusion. Researchers moved from the spatial domain to the transform domain because the latter is considered to represent the salient features of images more clearly and accurately than the former. Transform domain fusion follows the same categorization as spatial domain fusion: pixel, feature, and decision level fusion. From an implementation perspective, however, it is more meaningful to classify these methods by the transform used in fusion, e.g., wavelet transform based image fusion [28,37,38,43], curvelet transform based fusion, and discrete cosine transform based image fusion [44]. Figure 2 shows our categorization of multi-focus image fusion algorithms into different groups.

3. Multi-Focus Image Fusion in Transform Domain

Transform domain based multi-focus image fusion involves the conversion of the source images into transformation coefficients; after the fusion procedure is applied, the result is converted back to the image space. We broadly categorize the transform based image fusion techniques into three groups: wavelet based, curvelet based, and DCT based fusion.

3.1. Wavelets Based Image Fusion Techniques

The wavelet transform has been extensively explored for image fusion, resulting in the proposal of a number of algorithms. The image fusion algorithm proposed in [24] decomposes the source images into wavelet subbands using filters. The images are first passed through low-pass and high-pass filters horizontally and vertically, along with downsampling. The fused image is generated by averaging the approximation bands I_LL of the original images and by taking the largest coefficient of each detail subband (see the sketch at the end of this subsection). The wavelet based statistical sharpness measure (WSSM) [65] exploits the marginal distribution to extract local content information. The input images are decomposed, and the wavelet coefficient distribution is modeled using a two-component Laplacian model. The approximation subband of the fused image is then generated by taking the weighted average of the region entropy calculated for each approximation coefficient from its detail subband coefficients. In [66], the source images are decomposed into low- and high-pass subbands. The fused sparse vector is obtained by selecting the maximum of the sparse vectors of both images, and for the high-pass bands the coefficient with the maximum value is selected as the fused coefficient. In the non-subsampled contourlet transform (NSCT) method [56], low-pass subband and band-pass directional subband coefficients are generated for each source image, and these subbands are then combined using dedicated fusion rules.
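As an illustration of the averaging/maximum-selection rule described for [24], the following is a simplified sketch assuming the PyWavelets package is available; the wavelet family and decomposition depth are arbitrary choices made only for this example.

```python
# Simplified wavelet fusion rule: average the approximation bands, keep the
# detail coefficient with the larger magnitude. Assumes PyWavelets (pywt) and
# two registered grayscale sources of equal size.
import numpy as np
import pywt

def wavelet_fusion(img_a, img_b, wavelet="db2", levels=3):
    ca = pywt.wavedec2(img_a.astype(np.float64), wavelet, level=levels)
    cb = pywt.wavedec2(img_b.astype(np.float64), wavelet, level=levels)

    fused = [(ca[0] + cb[0]) / 2.0]                     # average the approximation band
    for da, db in zip(ca[1:], cb[1:]):                  # each level: (cH, cV, cD)
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(da, db)))    # max-magnitude detail coefficients
    return pywt.waverec2(fused, wavelet)
```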

3.2. Curvelet Based Image Fusion Techniques

In [18], it is demonstrated that wavelets may not be effective when the images do not exhibit isotropic scaling. To this end, a curvelet transform based image fusion algorithm is proposed in [44]. The authors argue that the curvelet transform is well suited for edge representation and, moreover, that its coefficients are not affected by noise. The source images are transformed into coefficients using the curvelet transform, and these coefficients are further decomposed with the wavelet transform. The inverse curvelet transform is then applied to the coefficients obtained from the inverse wavelet transform to obtain the final fused image.

3.3. Discrete Cosine Transform Based Image Fusion Techniques

The discrete cosine transform (DCT) [69] is a popular means for image fusion due to its energy compaction property. In the image fusion using DCT based Laplacian pyramid (DCTLP) algorithm [42], the DCT is used as a reduction function to form the Laplacian pyramid. For each level, only the first half of the source image in both directions is taken as input for the next pyramid level. The highest pyramid level is fused using the average rule, i.e., the average of both band-pass images is computed to obtain the next level image, and the process is repeated until the last level. In [22], the discrete cosine harmonic wavelet coefficients of each image are calculated and the DCT is applied to each subband. After computing all the coefficients, the two images are fused using pixel significance, which is computed as the ratio between the wavelet coefficients at each level. DCT based image fusion is efficient and time-saving, but it has some limitations, such as blocking artifacts and blurriness [67]. To address these issues, DCT with variance is used for fusion in [67].
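A hedged sketch of the variance-based selection behind [67] is given below: each block is transformed with the 2-D DCT, the variance estimated from its AC coefficients serves as the focus measure, and the block with the higher variance is copied to the fused image. The block size of 8 and the omission of the consistency verification step of the full method are simplifying assumptions.

```python
# Simplified DCT-with-variance fusion sketch: per-block variance computed from
# the AC coefficients of an orthonormal 2-D DCT (by Parseval, this equals the
# spatial variance of the block). Assumes registered grayscale inputs.
import numpy as np
from scipy.fft import dctn

def block_variance_dct(block):
    coeffs = dctn(block.astype(np.float64), norm="ortho")
    coeffs[0, 0] = 0.0                       # drop the DC term
    return np.sum(coeffs ** 2) / block.size  # variance estimate from AC energy

def dct_variance_fusion(img_a, img_b, block=8):
    fused = np.zeros_like(img_a, dtype=np.float64)
    h, w = img_a.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            a = img_a[y:y + block, x:x + block]
            b = img_b[y:y + block, x:x + block]
            fused[y:y + block, x:x + block] = a if block_variance_dct(a) >= block_variance_dct(b) else b
    return fused
```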

4. Multi-Focus Image Fusion in Spatial Domain

The spatial domain based multi-focus image fusion algorithms operate directly on the image pixels to obtain the all-in-focus image. We divide these approaches into three categories: pixel-based, feature-based, and decision-based.

4.1. Pixel Based Multi-Focus Image Fusion Techniques

Pixel-based methods are the lowest level of fusion techniques, in which focus information is evaluated at every pixel. Many multi-focus image fusion techniques suffer from different artifacts such as ringing and misregistration of boundary pixels. The dictionary based sparse representation algorithms [34,46,47,70] perform better in such cases. Sparse representation, however, has a limited ability to preserve detail and a high sensitivity to misregistration [20]. To solve these issues, convolutional sparse representation (CSR) is proposed in [20], which computes the sparse coefficients of the whole image rather than of patches. It is shift invariant, which helps improve quality in misregistered regions. Guided filtering [45] is mostly used in applications where edges need to be preserved. In the guided filtering fusion (GFF) of images [49], the images are decomposed into base and detail layers using average filters. A focus map is constructed from saliency maps computed by applying Laplacian and Gaussian low-pass filters to the input images, as sketched below.
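The following is a simplified sketch of this two-scale pipeline, assuming two registered grayscale sources: average filtering splits each image into base and detail layers, a saliency map (the absolute Laplacian response smoothed by a Gaussian) selects binary weights, and the layers are recombined. The guided-filter refinement of the weight maps, which is central to the full GFF method, is omitted here for brevity, and the filter sizes are arbitrary.

```python
# Two-scale fusion sketch in the spirit of GFF [49], without the guided-filter
# weight refinement. Assumes registered grayscale inputs as float arrays.
import numpy as np
from scipy import ndimage

def two_scale_fusion(img_a, img_b, avg_size=31, sigma=5.0):
    a, b = img_a.astype(np.float64), img_b.astype(np.float64)

    # Base/detail decomposition with an average (box) filter.
    base_a, base_b = ndimage.uniform_filter(a, avg_size), ndimage.uniform_filter(b, avg_size)
    detail_a, detail_b = a - base_a, b - base_b

    # Saliency: absolute Laplacian response smoothed by a Gaussian low-pass filter.
    sal_a = ndimage.gaussian_filter(np.abs(ndimage.laplace(a)), sigma)
    sal_b = ndimage.gaussian_filter(np.abs(ndimage.laplace(b)), sigma)
    weight_a = (sal_a >= sal_b).astype(np.float64)  # unrefined binary weight map

    base = weight_a * base_a + (1 - weight_a) * base_b
    detail = weight_a * detail_a + (1 - weight_a) * detail_b
    return base + detail
```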
The image fusion using matting (IFM) method [50] differentiates the foreground from the background using image matting, and this division is used to fuse the source images. Image matting [51] measures the transparency of the foreground, known as the alpha matte. The cross bilateral filtering (CBF) based fusion method [52] considers both the geometric and the gray level properties of the images for integration. In [53], quad-tree decomposition is used and the decision is taken based on the gradient information in each patch. The self-similarity and depth information (SSDI) approach [54] identifies similarities between the visual structures in the images, which are used to obtain the fused image.
Image orientation information (OI) and a pulse-coupled neural network (PCNN) are used for fusion in [55,56], respectively, taking the orientation information of the source images as a feature. In [57], a gradient based focus measure is calculated for each region and a decision map is created by copying the indices of the regions with greater focus. In [58], fusion is performed by computing two gradient based focus measures: one is used to find the exactly focused parts of the source images, whereas the other is used to determine the boundary pixel values. In the fusion method in [59], the decision map is computed using pixel luminance and gradient.

4.2. Feature-Based Multi-Focus Image Fusion

In feature based image fusion, features such as edge details, texture, etc., are extracted from the source images and used to construct the fused image. There are numerous feature-based multi-focus image fusion algorithms, e.g., DSIFT [33]. In DSIFT, local feature descriptors are extracted using the SIFT algorithm [71]. Activity level features are computed based on the local gradient, and a patch is marked as focused if the average of its coefficients is greater than that of the corresponding patches in the other images. The focus map is then used to perform the fusion. The principal component analysis (PCA) based image fusion [24] uses the covariance of the source images and its eigenvectors to obtain the fused image (see the sketch below). The multi-exposure fusion method proposed in [61] estimates the fusion map using local contrast, exposure quality, and spatial consistency. The method in [21] uses independent component analysis to obtain the fused image.
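A minimal sketch of PCA-weighted fusion in the spirit of [24] is shown below, assuming two registered grayscale sources: the eigenvector of the 2x2 covariance matrix associated with the largest eigenvalue provides the mixing weights for a weighted average of the two images.

```python
# PCA-weighted fusion sketch: the dominant eigenvector of the 2x2 covariance
# matrix of the two (vectorized) images gives the mixing weights.
import numpy as np

def pca_fusion(img_a, img_b):
    data = np.stack([img_a.ravel(), img_b.ravel()]).astype(np.float64)
    cov = np.cov(data)                                  # 2x2 covariance of the two images
    eigvals, eigvecs = np.linalg.eigh(cov)              # eigenvalues in ascending order
    principal = eigvecs[:, np.argmax(eigvals)]          # dominant eigenvector
    w = np.abs(principal) / np.sum(np.abs(principal))   # normalized, non-negative weights
    return w[0] * img_a + w[1] * img_b
```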

4.3. Decision-Based Multi-Focus Image Fusion

In decision level fusion, the source images are processed by different algorithms that act as local decision makers, e.g., a genetic algorithm [39]. In this method, the edges in the source images are detected and used with a genetic search to find optimum weights from a set of features: the mean, the standard deviation, and three central moments. These features are given as input to the genetic search and, after multiple iterations, it returns two optimum weights for each image. These two optimum weights are then multiplied with the respective images and added to generate the fused image. Decision-based fusion algorithms are also exploited for sensor data and biometric data fusion, e.g., [62,63,64].

5. Performance Evaluation

In this section, we evaluate the performance of 24 multi-focus image fusion algorithms. The list of selected methods is presented in Table 1.
The evaluation is performed qualitatively and quantitatively. In particular, we used 12 objective fusion quality assessment metrics to extensively evaluate their results. We also rank the algorithms using the Borda count technique [73,74,75] to find the top performers. Moreover, an analysis of the fusion quality assessment techniques is also presented. A time complexity analysis is also carried out to measure the overall effectiveness of the algorithms.

5.1. Performance Evaluation Datasets

The performance of the MIF algorithms is evaluated on two datasets: Lytro [33] and Grayscale [60]. The former contains 20 pairs of colored multi-focus images, each of dimension 520 × 520. The Grayscale dataset consists of 11 widely used pairs of multi-focus images collected from different sources; their dimensions vary from 160 × 160 to 944 × 736. These datasets contain indoor, outdoor, and aerial images. The image pairs are registered and each image pair has a different level of detail. The images in each pair are complementary, i.e., the focused region in one image is defocused in the other and vice versa. Thumbnails of both datasets are shown in Figure 3 and Figure 4. To perform the qualitative and objective evaluations, the fused images are obtained by running each algorithm on each multi-focus image set. Most of the results reported in this paper are generated from the source code provided directly by the authors of the respective papers; only a few algorithms were implemented by us or taken from third-party implementations. In either case, the parameters recommended in the respective papers are used to obtain the results. Moreover, since most of the algorithms do not report results on all images or on both datasets used in our study, the results obtained on the common images are compared to verify the correctness of the implemented algorithms.

5.2. Qualitative Evaluation

The qualitative evaluation is performed by comparing the visual quality of the fused images. To perform this comparison, we carefully examined the results achieved by the compared methods on all test image pairs. However, for conciseness, we chose one representative image from each dataset and report the comparison of the results achieved by the competing algorithms. From the Lytro dataset, the ‘fence’ image set is selected, as the fence has cross-section areas that provide rich edge and boundary information. From the Grayscale dataset, the ‘clock’ image pair is selected, which has been widely used in the image fusion literature for performance evaluation. To better express the outcomes of the visual inspection, the regions with artifacts are highlighted with red rectangles. Note that the image fusion algorithms QTD, SSDI, MSMFMg, NSCT, and OI use intensity images as input, and therefore their fused images are grayscale.
Figure 5 shows the fusion results achieved by the compared methods on the fence image set. The results show that the fused images generated by the MSMFM, MSMFMg, IFGD, and SSDI methods have unfocused fence regions, as highlighted in Figure 5. In the results of CSR, a few parts of the foreground are not as sharp as in the original images. Furthermore, the boundaries between the foreground and background regions are slightly blurred, and the color of the floor exhibits a reduction in contrast and brightness. In the DCT based approaches, DCHWT, DCTLP, and PCA, misregistration of edges is evident. The fused images look distorted, and the colors do not appear as consistent as in the focused parts of the source images. In addition, a ringing artifact is visible on the wall and around the boundaries of the fence.
In the fusion results of the DCHWT method, rippling artifacts appear near the edges. Moreover, the details are not sharp; e.g., the net's pillar and the people standing at a distance from the camera are indistinct. In the case of DSIFT, the top middle part of the fence in the fused image is blurred. The foreground is not fused well: there are solid lines, and the boundaries of the focused and unfocused regions are over-sharpened. The GFF method produces adequate results, but the fence joints are blurred and the foreground is slightly brighter; however, GFF preserves color contrast and brightness. The result of ICA exhibits blurriness near the shooter's head, near the boundaries of the focused and unfocused regions, and near the door, and the foreground is transparent near the ball. The IFM results suffer from color distortion on the fence and the ball. Details of the girl's face are also missing; it appears to be merged with the wall's color and contrast.
The fused images created by the PCA algorithm are highly blurred and exhibit a ringing effect. In this example, the edges of the fence are not well defined and the brightness of the image is reduced. The fusion results of the WSSM method show that the edges are not fused perfectly; the fence is distorted and suffers from a ghost effect in a number of regions. Moreover, the fused image has a ringing effect near the wall. In the results of GIF, the fence is not clear: it is transparent and unfocused in some regions. The results of the MWGF and NSCT methods still have minor unfocused regions, as highlighted with red rectangles in Figure 5. The results of the CBF algorithm suffer from over-sharpening of the face, reduced color contrast, transparency, and unfocused regions in different sections. Similar artifacts can be spotted in the fusion results achieved by the DSIFT2 and GRW methods. The results of the DCTV and OI algorithms are very poor; the resultant images have spherical artifacts, unfocused regions, and over-sharpened edges. The results show that the QTD algorithm performs better than the other compared methods; its fusion results are free from most artifacts.
The performance of the multi-focus image fusion approaches on the Grayscale dataset is discussed with the help of the clock test image pair. This image pair shows two clocks; one image shows the foreground clock in focus and, in the other, the background clock is in focus. The results achieved by the compared methods on this image pair are shown in Figure 6. The results show that the MSMFM and MSMFMg techniques do not exhibit any undecided pixel focus, and no ghost-affected areas exist; moreover, no ringing effect appears in the resultant images. The SSDI algorithm also shows good visual results; the only problem is a small unfocused portion at the bottom left boundary of the foreground clock. A number of regions are not in focus in the QTD and IFGD results. The fused image created by the CSR algorithm exhibits distortion on the left boundary of the smaller timepiece. The result obtained with the DCHWT approach has horizontal and vertical artifacts on the lower and upper sides of the image, respectively; the upper region of the clock is pixelated and similar horizontal and vertical lines are also visible.
The fused image generated by the DCTLP algorithm does not preserve the color contrast of the original images. The DSIFT fused image exhibits good quality, except that a few regions are grainy and there is blurriness at the boundary between the focused part of the foreground and the background. Blurriness can also be spotted in the fusion results of the GFF, GIF, and IFM techniques. PCA and WSSM do not show convincing results, as can be seen in Figure 6: the PCA fused image is blurry and its color contrast is inadequate, whereas WSSM and MWGF exhibit structural distortions and ghost regions on the right side of the foreground clock and the upper side of the background clock. The fusion results achieved by the CBF and MSTSR techniques suffer from color distortion and the so-called grainy effect due to imperfect fusion maps. The fusion results of the DCTV and OI techniques are blurry and suffer from structural distortions.
From the visual comparison of the results presented in Figure 5 and Figure 6, we conclude that DSIFT, QTD, GFF, NSCT, and MSTSR are among the five best fusion methods for the Lytro dataset, and MSMFM, MSMFMg, DSIFT, QTD, and SSDI are the best five for the Grayscale dataset.

5.3. Quantitative Evaluation

Evaluating the performance of fusion algorithms is not easy. Two practices can be followed to investigate their effectiveness. The first is to compare the fused image with a ground truth (reference) image; however, in most practical applications, ground truths are not available. For this reason, the second practice, which is to assess the fused image blindly without a reference image, is commonly used. To validate the performance of an algorithm, objective assessment is necessary along with visual assessment. Therefore, numerous objective assessment models for the evaluation of fusion results have been proposed, e.g., [76,77,78,79,80].
An extensive objective evaluation is performed to assess the fusion quality of the compared multi-focus image fusion algorithms. In particular, we used 12 different objective fusion quality assessment metrics in this evaluation. These metrics use various image characteristics to assess quality and, based on these characteristics, they are divided into four groups [77].
  • Information theory based metrics measure the quality of the fused image using probability based quantities, i.e., mutual information, divergence, and correlation, between the fused image and the source images.
  • Feature based metrics estimate the quality of a fused image by considering different types of features, such as gradient, edge information, and spatial frequency.
  • Structural similarity based metrics compare the structural information of the fused image and the source images to estimate the fusion quality.
  • Human perception based metrics consider the contrast, overlapping regions, misregistration of pixels, and edge information in estimating the quality of the fused image with respect to the source images.
The metrics used in this evaluation, along with their respective categories, are listed in Table 2.
The fusion results obtained by each algorithm on both datasets are evaluated using all 12 objective quality assessment metrics listed in Table 2. In this evaluation, the fusion quality assessment library proposed in [77] is used. The detailed evaluation results on the Grayscale dataset are presented in Figure 7 and those on the Lytro dataset in Figure 8. The Q_MI metric evaluates the fused images on the basis of the joint and marginal probability distributions of the fused image and the source images; it ranks MSMFMg, DSIFT, and QTD as the three best algorithms for the Grayscale dataset and QTD, MSMFM, and SSDI for the Lytro dataset. The Q_NCIE metric takes the non-linear correlation coefficient into account while evaluating the fused image; it ranks OI, MSMFM, and QTD as the top algorithms for both datasets. The Q_TE metric uses entropy to calculate the quality of the fused images; it ranks OI, DSIFT2, and NSCT at the top for the Grayscale dataset, and ICA, GFF, and DSIFT2 as the top performing algorithms for the Lytro dataset. VIFF uses different models, such as the Gaussian scale mixture model, to evaluate the fused image quality; according to VIFF, IFGD, ICA, and MSTSR are the best performers for the Grayscale dataset, whereas IFGD, MSTSR, and QTD lead for the Lytro dataset. Table 3 summarizes the evaluation results based on the information theory metrics and presents the three best image fusion algorithms for both datasets.
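To make the information theory based metrics more concrete, the following is a hedged sketch of the mutual information computation underlying the basic form of Q_MI: the joint and marginal histograms of a source image and the fused image yield their mutual information, and the contributions of both sources are summed. Normalization variants used in the literature (e.g., by the source and fused image entropies) are omitted.

```python
# Basic mutual-information fusion quality sketch: MI(A, F) + MI(B, F) estimated
# from joint histograms. Normalized variants of Q_MI are not reproduced here.
import numpy as np

def mutual_information(x, y, bins=256):
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0                          # avoid log(0)
    return np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz]))

def q_mi_basic(img_a, img_b, fused, bins=256):
    return mutual_information(img_a, fused, bins) + mutual_information(img_b, fused, bins)
```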
The Q_G metric calculates edge preservation, using the Sobel edge detection operator as well as edge orientation, to evaluate the fused image quality. According to this metric, MSMFMg, GIF, and QTD are among the best for the Grayscale dataset and MSMFM, DSIFT, and QTD for the Lytro dataset. The Q_SF metric takes into account the gradient information of the fused and source images in four different directions. In our evaluations, Q_SF ranks IFGD, ICA, and DCTV as the best three algorithms for the Grayscale dataset, whereas for the Lytro dataset, DSIFT, DCTV, and IFM are top ranked. The Q_P metric uses the phase congruency information of the fused and source images to assess the quality; on the basis of its evaluations, GIF, DSIFT, and MSMFMg are the top performing algorithms on the Grayscale dataset and GFF, GIF, and MSMFM on the Lytro dataset. The Q_M metric uses edge information calculated from the low- and high-pass components of the wavelet transform. The results of the feature based metrics are summarized in Table 4, which shows that DSIFT, QTD, and DCTV are among the best algorithms for both the Grayscale and Lytro datasets. These results are consistent with our visual inspection.
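As an aside, the four-direction spatial frequency used by Q_SF [83] can be sketched as follows; this shows only the per-image spatial frequency with the diagonal terms down-weighted by 1/√2, while the comparison of the fused image's value against a reference derived from the source gradients, which completes the metric, is not shown.

```python
# Four-direction spatial frequency sketch (the per-image quantity used in
# Q_SF [83]): row, column, main-diagonal, and secondary-diagonal gradients,
# with the diagonal contributions weighted by 1/sqrt(2).
import numpy as np

def spatial_frequency_4d(img):
    i = img.astype(np.float64)
    rf = np.mean((i[:, 1:] - i[:, :-1]) ** 2)                   # row (horizontal) gradients
    cf = np.mean((i[1:, :] - i[:-1, :]) ** 2)                   # column (vertical) gradients
    mdf = np.mean((i[1:, 1:] - i[:-1, :-1]) ** 2) / np.sqrt(2)  # main diagonal
    sdf = np.mean((i[1:, :-1] - i[:-1, 1:]) ** 2) / np.sqrt(2)  # secondary diagonal
    return np.sqrt(rf + cf + mdf + sdf)
```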
The structural similarity based metric Q_S evaluates the fused image quality on the basis of variance. It ranks ICA, DSIFT2, and CBF as the best algorithms for the Grayscale dataset and ICA, CBF, and MSTSR for the Lytro dataset. The Q_Y metric also takes other statistical features, such as correlation, covariance, and edge dependent information, into account while evaluating the fused image quality. It ranks GIF, MSMFMg, and QTD as the top algorithms for the Grayscale dataset and MSMFM, QTD, and DSIFT for the Lytro images. The results of the structural similarity based metrics are shown in Table 5.
The summary of the performance evaluation using the human perception based metrics Q_CV and Q_CB is presented in Table 6. The Q_CV metric considers the edge quality, the similarity of local regions, and a global quality measurement over non-overlapping regions in evaluating the fused image quality. This metric ranks IFGD and OI as the best algorithms for both the Lytro and Grayscale datasets. The Q_CB metric uses contrast based features to assess the fusion results. It rates GIF, MSMFMg, and QTD as the best for the Grayscale dataset and MSMFM and DSIFT for the Lytro dataset.

5.4. Borda Count Ranking of Image Fusion Algorithms

The Borda count [73,74,75] is a voting technique that ranks candidates according to voters' preferences. The preferences are converted to scores: the candidate with the maximum score wins, the second highest scorer takes the second spot, and so on. We use this technique here to rank the image fusion algorithms based on their ratings determined by the objective image fusion quality assessment metrics. Since 24 algorithms are being evaluated, each algorithm is assigned an integer value between 1 and 24 based on its performance as measured by an objective quality metric. The scores are given in inverse proportion to the ranking; that is, the best performing approach is assigned a score of 24 and the worst a score of 1. For each fusion algorithm, these scores are accumulated over all metrics to obtain an aggregated score, which decides its rank.
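The aggregation step can be expressed in a few lines; the following is a small sketch with hypothetical metric values, assuming that a higher value is better for every metric (metrics where lower is better would need their ranking order reversed).

```python
# Borda count aggregation sketch: for each metric, award points in inverse
# proportion to the rank (best = N points, worst = 1) and sum across metrics.
def borda_count(scores):
    """scores: dict metric -> dict algorithm -> value. Returns aggregated points."""
    algorithms = next(iter(scores.values())).keys()
    totals = {alg: 0 for alg in algorithms}
    n = len(totals)
    for metric_scores in scores.values():
        ranked = sorted(metric_scores, key=metric_scores.get, reverse=True)
        for rank, alg in enumerate(ranked):       # rank 0 is the best performer
            totals[alg] += n - rank               # best gets N points, worst gets 1
    return dict(sorted(totals.items(), key=lambda kv: kv[1], reverse=True))

# Example with hypothetical values for three algorithms and two metrics:
print(borda_count({"Q_MI": {"QTD": 1.10, "DSIFT": 1.05, "PCA": 0.80},
                   "Q_G":  {"QTD": 0.72, "DSIFT": 0.75, "PCA": 0.60}}))
```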
The Borda count scores that each algorithm received on the Grayscale and Lytro datasets are presented in Table 7. The statistics reveal that for the Grayscale dataset MSMFM, MSMFMg, QTD, DSIFT, and GIF algorithms are the best five, highlighted in bold. Interestingly, these results are the same as those obtained from visual evaluation. On the Lytro dataset, the Borda count rated QTD, DSIFT, MSMFMg, MSMFM, and SSDI as the best five fusion techniques. To get an overall picture of this analysis, the scores obtained by a method on each dataset are aggregated. The algorithms are ranked based on the aggregated scores and the results are presented in Figure 9. The results show that MSMFMg is rated as the best algorithm with an aggregate score of 462, followed by QTD, MSMFM, and DSIFT with very close scores of 459, 454, and 453, respectively.

5.5. Summary

A summary of the findings of the qualitative and quantitative evaluations is presented in Table 8. It shows the results of both evaluations on each dataset as well as an overall assessment. On the Grayscale dataset, the results of the visual and objective evaluations are the same, except that the visual evaluation includes SSDI whereas the objective assessment brings GIF into the five best algorithms. On the Lytro dataset, the visually and objectively determined lists of the five best algorithms are the same except for three differences: the qualitative list includes MSTSR, GFF, and NSCT, whereas the objective list includes MSMFM, SSDI, and MSMFMg. The overall evaluation considering both datasets is the same as that of the Grayscale dataset; that is, the list of the best five algorithms contains approximately the same methods with a slightly different ordering. From the statistics presented in Table 8, it can be noted that the results of the visual and the objective evaluations mostly agree, confirming the results and validating the effectiveness of the best rated image fusion methods.
Another interesting fact notable from Table 8 is that all the best performing methods are spatial domain based. To investigate this further, in Table 9 we list the five best performing MIF algorithms of each category with their Borda count based rank (Figure 9). The results reveal that the best eight methods among the 24 compared methods are spatial domain based algorithms. Among the transform domain based methods, DCTV is the best performing and is ranked at number 9 among the compared methods. Since we divided the methods in each category into different groups, the best ranked algorithm in each group, with its BC ranking, is presented in Table 10. These statistics further shed light on the most suitable domain/representation for efficient multi-focus image fusion. The results show that, in the spatial domain, the pixel based MIF algorithms perform particularly better than the other groups; in the transform domain, the DCT based method is of particular interest. These are interesting observations which need further investigation to discover the limitations of the frequency domain for multi-focus image fusion. We observed that the better performance of the spatial domain based methods is due to their accurate detection of focused and defocused regions, which leads to crisp fusion results; such precise segmentation is not witnessed in most frequency domain methods, which suffer from different artifacts, e.g., the ghost effect.

5.6. Computational Time Complexity Comparison

We also evaluate the image fusion algorithms on their computational time complexity. To this end, the execution time of each algorithm is computed on all multi-focus image pairs of the Grayscale and Lytro datasets. All algorithms were executed with the default parameters described in the respective papers. The evaluation is performed in the MATLAB environment on an Intel® Core™ i5 processor with 4 GB RAM and the 64-bit Windows 10 operating system. The execution times reported here do not include the file I/O time.
For each dataset, the execution time for each image set is computed and averaged. The average execution time over both datasets is calculated for each method, and the results are reported in Figure 10. To ease the analysis, the results are arranged in non-decreasing order of the average time over both datasets (the blue bars). The results show that as many as 10 algorithms take around 1 s on average to fuse a pair of images. Seven algorithms perform fusion in 1 to 10 s on average, and the remaining seven methods are computationally expensive, consuming 40 to 430 s per image pair. These expensive techniques are mostly sparse representation based, e.g., CSR and MSTSR; others use wavelets with pyramids, e.g., WSSM, which significantly increases their execution time. The NSCT method uses curvelets in combination with wavelets and consumes more time than simple wavelet methods because of its two multi-scale decomposition procedures.

6. Fusion Library

We also introduce a multi-focus image fusion library which provides the implementation of all 24 multi-focus image fusion algorithms selected for evaluation in this paper and listed in Table 1. The library is implemented in MATLAB and is provided as a standalone component that contains all the dependencies, making it extremely easy to use. The current version of the proposed library supports more than 24 image fusion methods, and it will be kept updated by adding support for more image fusion techniques. We also encourage the multi-focus image fusion research community to contribute their methods for inclusion in this library. The multi-focus image datasets and the library are available free of charge at the project web page: http://www.di.unito.it/~farid/Research/FusionLib.html.

7. Conclusions

Multi-focus image fusion techniques merge the focused parts of images of the same scene captured with different focus settings to obtain a single all-in-focus image. The fused image has an extended depth of field and is easily interpreted by both human and machine. The main goal of image fusion is to incorporate complementary parts of different images to obtain a better understanding of a scene; it increases the detail of an image and improves the reliability of the results. In this study, numerous multi-focus image fusion techniques are reviewed and tested on two datasets for comparison and analysis of their performance. Both qualitative and quantitative approaches are used to evaluate the results. The second contribution of this paper is the proposal of an image fusion library. The library provides implementations of 24 image fusion methods; it is easy to use, implemented in MATLAB, and released free for public and peer use.

Author Contributions

Conceptualization, R.Z. and M.S.F.; methodology, R.Z., M.S.F., and M.H.K.; software, R.Z., and M.S.F.; validation and supervision, R.Z., M.S.F., and M.H.K.; writing—original draft preparation, R.Z. and M.S.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

We acknowledge the authors of different image fusion algorithms for providing the source code of their methods.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wan, T.; Zhu, C.; Qin, Z. Multifocus image fusion based on robust principal component analysis. Pattern Recognit. Lett. 2013, 34, 1001–1008. [Google Scholar] [CrossRef]
  2. Xiao, B.; Ou, G.; Tang, H.; Bi, X.; Li, W. Multi-focus Image Fusion by Hessian Matrix-Based Decomposition. IEEE Trans. Multimed. 2020, 22, 285–297. [Google Scholar] [CrossRef]
  3. Zhang, Q.; Guo, B.-L. Multifocus image fusion using the nonsubsampled contourlet transform. IEEE Trans. Signal Process. 2009, 89, 1334–1346. [Google Scholar] [CrossRef]
  4. Guo, X.; Nie, R.; Cao, J.; Zhou, D.; Mei, L.; He, K. FuseGAN: Learning to Fuse Multi-Focus Image via Conditional Generative Adversarial Network. IEEE Trans. Multimed. 2019, 21, 1982–1996. [Google Scholar] [CrossRef]
  5. Kou, F.; Wei, Z.; Chen, W.; Wu, X.; Wen, C.; Li, Z. Intelligent Detail Enhancement for Exposure Fusion. IEEE Trans. Multimed. 2018, 20, 484–495. [Google Scholar] [CrossRef]
  6. Amin-Naji, M.; Aghagolzadeh, A. Multi-Focus Image Fusion in DCT Domain using Variance and Energy of Laplacian and Correlation Coefficient for Visual Sensor Networks. J. AI Data Min. 2018, 6, 233–250. [Google Scholar]
  7. Li, H.; Jing, L.; Tang, Y.; Wang, L. An Image Fusion Method Based on Image Segmentation for High-Resolution Remotely-Sensed Imagery. Remote Sens. 2018, 10, 790. [Google Scholar] [CrossRef] [Green Version]
  8. Dou, W. Image Degradation for Quality Assessment of Pan-Sharpening Methods. Remote Sens. 2018, 10, 154. [Google Scholar] [CrossRef] [Green Version]
  9. Cao, T.; Dinh, A.; Wahid, K.A.; Panjvani, K.; Vail, S. Multi-Focus Fusion Technique on Low-Cost Camera Images for Canola Phenotyping. Sensors 2018, 18, 1887. [Google Scholar] [CrossRef] [Green Version]
  10. Li, Q.; Yang, X.; Wu, W.; Liu, K.; Jeon, G. Multi-Focus Image Fusion Method for Vision Sensor Systems via Dictionary Learning with Guided Filter. Sensors 2018, 18, 2143. [Google Scholar] [CrossRef] [Green Version]
  11. Ganasala, P.; Kumar, V. Multimodality medical image fusion based on new features in NSST domain. Biomed. Eng. Lett. 2014, 4, 414–424. [Google Scholar] [CrossRef]
  12. Laganà, M.M.; Preti, M.G.; Forzoni, L.; D’Onofrio, S.; De Beni, S.; Barberio, A.; Pietro, C.; Baselli, G. Transcranial Ultrasound and Magnetic Resonance Image Fusion With Virtual Navigator. IEEE Trans. Multimed. 2013, 15, 1039–1048. [Google Scholar] [CrossRef]
  13. Du, J.; Li, W.; Tan, H. Intrinsic Image Decomposition-Based Grey and Pseudo-Color Medical Image Fusion. IEEE Access 2019, 7, 56443–56456. [Google Scholar] [CrossRef]
  14. Wang, T.; Chiu, C.; Wu, W.; Wang, J.; Lin, C.; Chiu, C.; Liou, J. Pseudo-Multiple-Exposure-Based Tone Fusion With Local Region Adjustment. IEEE Trans. Multimed. 2015, 17, 470–484. [Google Scholar] [CrossRef]
  15. Hu, H.; Wu, J.; Li, B.; Guo, Q.; Zheng, J. An Adaptive Fusion Algorithm for Visible and Infrared Videos Based on Entropy and the Cumulative Distribution of Gray Levels. IEEE Trans. Multimed. 2017, 19, 2706–2719. [Google Scholar] [CrossRef]
  16. Borsoi, R.A.; Imbiriba, T.; Bermudez, J.C.M. Super-Resolution for Hyperspectral and Multispectral Image Fusion Accounting for Seasonal Spectral Variability. IEEE Trans. Image Process. 2020, 29, 116–127. [Google Scholar] [CrossRef] [Green Version]
  17. Shao, Z.; Cai, J. Remote Sensing Image Fusion With Deep Convolutional Neural Network. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 1656–1669. [Google Scholar] [CrossRef]
  18. Yang, B.; Li, S. Multifocus Image Fusion and Restoration With Sparse Representation. IEEE Trans. Instrum. Meas. 2010, 59, 884–892. [Google Scholar] [CrossRef]
  19. Merianos, I.; Mitianoudis, N. Multiple-Exposure Image Fusion for HDR Image Synthesis Using Learned Analysis Transformations. J. Imaging 2019, 5, 32. [Google Scholar] [CrossRef] [Green Version]
  20. Liu, Y.; Chen, X.; Ward, R.K.; Wang, Z.J. Image Fusion With Convolutional Sparse Representation. IEEE Trans. Signal Process. 2016, 23, 1882–1886. [Google Scholar] [CrossRef]
  21. Mitianoudis, N.; Stathaki, T. Pixel-based and region-based image fusion schemes using ICA bases. Inf. Fusion 2007, 8, 131–142. [Google Scholar] [CrossRef] [Green Version]
  22. Kumar, B.K.S. Multifocus and multispectral image fusion based on pixel significance using discrete cosine harmonic wavelet transform. Signal Image Video Process. 2013, 7, 1125–1143. [Google Scholar] [CrossRef]
  23. Rahman, M.A.; Lin, S.C.F.; Wong, C.Y.; Jiang, G.; Liu, S.; Kwok, N. Efficient colour image compression using fusion approach. Imaging Sci. J. 2016, 64, 166–177. [Google Scholar] [CrossRef]
  24. Naidu, V.P.S.; Raol, J.R. Pixel-level Image Fusion using Wavelets and Principal Component Analysis. Def. Sci. J. 2008, 58, 338–352. [Google Scholar] [CrossRef]
  25. Burt, P.; Adelson, E. The Laplacian Pyramid as a Compact Image Code. IEEE Trans. Commun. 1983, 31, 532–540. [Google Scholar] [CrossRef]
  26. Adelson, E.H.; Anderson, C.H.; Bergen, J.R.; Burt, P.J.; Ogden, J.M. Pyramid methods in image processing. RCA Eng. 1984, 29, 33–41. [Google Scholar]
  27. Zhao, W.; Lu, H.; Wang, D. Multisensor Image Fusion and Enhancement in Spectral Total Variation Domain. IEEE Trans. Multimed. 2018, 20, 866–879. [Google Scholar] [CrossRef]
  28. Rockinger, O. Image sequence fusion using a shift-invariant wavelet transform. In Proceedings of the International Conference on Image Processing, Santa Barbara, CA, USA, 26–29 October 1997; Volume 3, pp. 288–291. [Google Scholar]
  29. Li, H.; Manjunath, B.; Mitra, S. Multisensor Image Fusion Using the Wavelet Transform. Graph. Models Image Proc. 1995, 57, 235–245. [Google Scholar] [CrossRef]
  30. Tian Pu, G.N. Contrast-based image fusion using the discrete wavelet transform. Opt. Eng. 2000, 39. [Google Scholar] [CrossRef]
  31. Wang, W.W.; Shui, P.L.; Feng, X.C. Variational Models for Fusion and Denoising of Multifocus Images. IEEE Trans. Signal Process. 2008, 15, 65–68. [Google Scholar] [CrossRef]
  32. Wan, T.; Canagarajah, N.; Achim, A. Segmentation-driven Image Fusion Based on Alpha-stable Modeling of Wavelet Coefficients. IEEE Trans. Multimed. 2009, 11, 624–633. [Google Scholar] [CrossRef] [Green Version]
  33. Liu, Y.; Liu, S.; Wang, Z. Multi-focus image fusion with dense SIFT. Inf. Fusion 2015, 23, 139–155. [Google Scholar] [CrossRef]
  34. Nejati, M.; Samavi, S.; Shirani, S. Multi-focus image fusion using dictionary-based sparse representation. Inf. Fusion 2015, 25, 72–84. [Google Scholar] [CrossRef]
  35. Liu, Z.; Chai, Y.; Yin, H.; Zhou, J.; Zhu, Z. A Novel Multi-focus Image Fusion Approach Based on Image Decomposition. Inf. Fusion 2017, 35, 102–116. [Google Scholar] [CrossRef]
  36. Cao, L.; Jin, L.; Tao, H.; Li, G.; Zhuang, Z.; Zhang, Y. Multi-Focus Image Fusion Based on Spatial Frequency in Discrete Cosine Transform Domain. IEEE Trans. Signal Process. 2015, 22, 220–224. [Google Scholar] [CrossRef]
  37. Li, S.; Kwok, J.; Wang, Y. Combination of images with diverse focuses using the spatial frequency. Inf. Fusion 2001, 2, 169–176. [Google Scholar] [CrossRef]
  38. Li, S.; Yang, B. Multifocus Image Fusion Using Region Segmentation and Spatial Frequency. Image Vis. Comput. 2008, 26, 971–979. [Google Scholar] [CrossRef]
  39. Abhyankar, M.; Khaparde, A.; Deshmukh, V. Spatial domain decision based image fusion using superimposition. In Proceedings of the 2016 IEEE/ACIS 15th International Conference on Computer and Information Science (ICIS), Okayama, Japan, 26–29 June 2016; pp. 1–6. [Google Scholar]
  40. Tian, J.; Chen, L. Adaptive Multi-focus Image Fusion Using a Wavelet-based Statistical Sharpness Measure. IEEE Trans. Signal Process. 2012, 92, 2137–2146. [Google Scholar] [CrossRef]
  41. Nunez, J. Multiresolution-based image fusion with additive wavelet decomposition. IEEE Trans. Geosci. Remote Sens. 1999, 37, 1204–1211. [Google Scholar] [CrossRef] [Green Version]
  42. Naidu, V.; Elias, B. A Novel Image Fusion Technique using DCT based Laplacian Pyramid. Int. J. Inven. Eng. Sci. (IJIES) 2013, 1, 1–9. [Google Scholar]
  43. Li, S.; Yang, B.; Hu, J. Performance Comparison of Different Multi-resolution Transforms for Image Fusion. Inf. Fusion 2011, 12, 74–84. [Google Scholar] [CrossRef]
  44. Li, S.; Yang, B. Multifocus Image Fusion by Combining Curvelet and Wavelet Transform. Pattern Recognit. Lett. 2008, 29, 1295–1301. [Google Scholar] [CrossRef]
  45. He, K.; Sun, J.; Tang, X. Guided Image Filtering. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1397–1409. [Google Scholar] [CrossRef] [PubMed]
  46. Wright, J.; Ma, Y.; Mairal, J.; Sapiro, G.; Huang, T.S.; Yan, S. Sparse Representation for Computer Vision and Pattern Recognition. Proc. IEEE 2010, 98, 1031–1044. [Google Scholar] [CrossRef] [Green Version]
  47. Tropp, J.A. Greed is good: Algorithmic results for sparse approximation. IEEE Trans. Inf. Theory 2004, 50, 2231–2242. [Google Scholar] [CrossRef] [Green Version]
  48. Qiu, X.; Li, M.; Zhang, L.; Yuan, X. Guided filter-based multi-focus image fusion through focus region detection. Signal Process. Image Commun. 2019, 72, 35–46. [Google Scholar] [CrossRef]
  49. Li, S.; Kang, X.; Hu, J. Image Fusion With Guided Filtering. IEEE Trans. Image Process. 2013, 22, 2864–2875. [Google Scholar]
  50. Li, S.; Kang, X.; Hu, J.; Yang, B. Image Matting for Fusion of Multi-focus Images in Dynamic Scenes. Inf. Fusion 2013, 14, 147–162. [Google Scholar] [CrossRef]
  51. Wang, J.; Cohen, M.F. Image and Video Matting: A Survey; Foundations and Trends in Computer Graphics and Vision; Now Publishers Inc.: Delft, The Netherlands, 2007; Volume 3, pp. 97–175. [Google Scholar]
  52. Shreyamsha Kumar, B.K. Image fusion based on pixel significance using cross bilateral filter. Signal Image Video Process. 2015, 9, 1193–1204. [Google Scholar] [CrossRef]
  53. Bai, X.; Zhang, Y.; Zhou, F.; Xue, B. Quadtree-based multi-focus image fusion using a weighted focus-measure. Inf. Fusion 2015, 22, 105–118. [Google Scholar] [CrossRef]
  54. Guo, D.; Yan, J.; Qu, X. High quality multi-focus image fusion using self-similarity and depth information. Opt. Commun. 2015, 338, 138–144. [Google Scholar] [CrossRef]
  55. Qu, X.; Hu, C.; Yan, J. Image fusion algorithm based on orientation information motivated Pulse Coupled Neural Networks. In Proceedings of the 7th World Congress on Intelligent Control and Automation, Chongqing, China, 25–27 June 2008; pp. 2437–2441. [Google Scholar]
  56. Qu, X.-B.; Yan, J.-W.; Xiao, H.-Z.; Zhu, Z.-Q. Image Fusion Algorithm Based on Spatial Frequency-Motivated Pulse Coupled Neural Networks in Nonsubsampled Contourlet Transform Domain. Acta Autom. Sin. 2008, 34, 1508–1514. [Google Scholar] [CrossRef]
  57. Zhang, Y.; Bai, X.; Wang, T. Boundary finding based multi-focus image fusion through multi-scale morphological focus-measure. Inf. Fusion 2017, 35, 81–101. [Google Scholar] [CrossRef]
  58. Zhou, Z.; Li, S.; Wang, B. Multi-scale weighted gradient-based fusion for multi-focus images. Inf. Fusion 2014, 20, 60–72. [Google Scholar] [CrossRef]
  59. Paul, S.; Sevcenco, I.S.; Agathoklis, P. Multi-Exposure and Multi-Focus Image Fusion in Gradient Domain. J. Circuits Syst. Comput. 2016, 25, 1650123. [Google Scholar] [CrossRef] [Green Version]
  60. Farid, M.S.; Mahmood, A.; Al-Maadeed, S.A. Multi-focus image fusion using Content Adaptive Blurring. Inf. Fusion 2019, 45, 96–112. [Google Scholar] [CrossRef]
  61. Liu, Y.; Wang, Z. Dense SIFT for Ghost-free Multi-exposure Fusion. J. Vis. Commun. Image Represent. 2015, 31, 208–224. [Google Scholar] [CrossRef]
  62. Tao, Q.; Veldhuis, R. Threshold-optimized decision-level fusion and its application to biometrics. Pattern Recognit. 2009, 42, 823–836. [Google Scholar] [CrossRef]
  63. Durrant-Whyte, H.; Henderson, T.C. Multisensor Data Fusion. In Springer Handbook of Robotics; Springer: Berlin, Germany, 2008; pp. 585–610. [Google Scholar]
  64. Varshney, P.K. Multisensor Data Fusion. Intelligent Problem Solving. Methodologies and Approaches; Logananthara, R., Palm, G., Ali, M., Eds.; Springer: Berlin/Heidelberg, Germany, 2000; pp. 1–3. [Google Scholar]
  65. Tian, J.; Chen, L. Multi-focus image fusion using wavelet-domain statistics. In Proceedings of the 2010 IEEE International Conference on Image Processing, Hong Kong, China, 26–29 September 2010; pp. 1205–1208. [Google Scholar]
  66. Liu, Y.; Liu, S.; Wang, Z. A General Framework for Image Fusion Based on Multi-scale Transform and Sparse Representation. Inf. Fusion 2015, 24, 147–164. [Google Scholar] [CrossRef]
  67. Haghighat, M.B.A.; Aghagolzadeh, A.; Seyedarabi, H. Multi-focus image fusion for visual sensor networks in DCT domain. Comput. Electr. Eng. 2011, 37, 789–797. [Google Scholar] [CrossRef]
  68. Martorell, O.; Sbert, C.; Buades, A. Ghosting-free DCT based multi-exposure image fusion. Signal Process. Image Commun. 2019, 78, 409–425. [Google Scholar] [CrossRef]
  69. Wikipedia Contributors. Discrete Cosine Transform Wikipedia, The Free Encyclopedia. 2020. Available online: https://en.wikipedia.org/wiki/Discrete_cosine_transform (accessed on 30 June 2020).
  70. Ma, X.; Hu, S.; Liu, S.; Fang, J.; Xu, S. Multi-focus image fusion based on joint sparse representation and optimum theory. Signal Process. Image Commun. 2019, 78, 125–134. [Google Scholar] [CrossRef]
  71. Lowe, D.G. Distinctive Image Features from Scale-Invariant Keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
  72. Shen, R.; Cheng, I.; Shi, J.; Basu, A. Generalized Random Walks for Fusion of Multi-Exposure Images. IEEE Trans. Image Process. 2011, 20, 3634–3646. [Google Scholar] [CrossRef]
  73. Lippman, D. Math in Society; Lippman, D., Ed.; CreateSpace Independent Publishing Platform: Scotts Valley, CA, USA, 2012. [Google Scholar]
  74. Emerson, P. The original Borda count and partial voting. Soc. Choice Welf. 2013, 40, 353–358. [Google Scholar] [CrossRef]
  75. Emerson, P. From Majority Rule to Inclusive Politics; Springer: Berlin, Germany, 2016. [Google Scholar]
  76. Xydeas, C.S.; Petrovic, V. Objective image fusion performance measure. Electron. Lett. 2000, 36, 308–309. [Google Scholar] [CrossRef] [Green Version]
  77. Liu, Z.; Blasch, E.; Xue, Z.; Zhao, J.; Laganiere, R.; Wu, W. Objective Assessment of Multiresolution Image Fusion Algorithms for Context Enhancement in Night Vision: A Comparative Study. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 94–109. [Google Scholar] [CrossRef]
  78. Qu, G.; Zhang, D.; Yan, P. Information measure for performance of image fusion. Electron. Lett. 2002, 38, 313–315. [Google Scholar] [CrossRef] [Green Version]
  79. Hossny, M.; Nahavandi, S.; Creighton, D. Comments on ‘Information measure for performance of image fusion’. Electron. Lett. 2008, 44, 1066–1067. [Google Scholar] [CrossRef] [Green Version]
  80. Han, Y.; Cai, Y.; Cao, Y.; Xu, X. A New Image Fusion Performance Metric Based on Visual Information Fidelity. Inf. Fusion 2013, 14, 127–135. [Google Scholar] [CrossRef]
  81. Wang, Q.; Shen, Y.; Jin, J. 19—Performance evaluation of image fusion techniques. In Image Fusion: Algorithms and Applications; Stathaki, T., Ed.; Academic Press: Oxford, UK, 2008; pp. 469–492. [Google Scholar]
  82. Cvejic, N.; Canagarajah, C.; Bull, D. Image fusion metric based on mutual information and Tsallis entropy. Electron. Lett. 2006, 42, 626–627. [Google Scholar] [CrossRef]
  83. Zheng, Y.; Essock, E.A.; Hansen, B.C.; Haun, A.M. A new metric based on extended spatial frequency and its application to DWT based fusion algorithms. Inf. Fusion 2007, 8, 177–192. [Google Scholar] [CrossRef]
  84. Wang, P.W.; Liu, B. A novel image fusion metric based on multi-scale analysis. In Proceedings of the 2008 9th International Conference on Signal Processing, Beijing, China, 26–29 October 2008; pp. 965–968. [Google Scholar]
  85. Liu, Z.; Forsyth, D.S.; Laganière, R. A feature-based metric for the quantitative evaluation of pixel-level image fusion. Comput. Vis. Image Underst. 2008, 109, 56–68. [Google Scholar] [CrossRef]
  86. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [Green Version]
  87. Yang, C.; Zhang, J.-Q.; Wang, X.-R.; Liu, X. A novel similarity based quality metric for image fusion. Inf. Fusion 2008, 9, 156–160. [Google Scholar] [CrossRef]
  88. Chen, H.; Varshney, P.K. A human perception inspired quality metric for image fusion based on regional information. Inf. Fusion 2007, 8, 193–207. [Google Scholar] [CrossRef]
  89. Chen, Y.; Blum, R.S. A new automated quality assessment algorithm for image fusion. Image Vis. Comput. 2009, 27, 1421–1432. [Google Scholar] [CrossRef]
Figure 1. General image fusion process using two multi-focus images.
Figure 2. Categorization of multi-focus image fusion algorithms. The algorithms are divided into two main categories, spatial domain based algorithms and transform domain based algorithms, which are divided into further sub-categories. Spatial domain based algorithms: Pixel based Fusion [20,34,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60], Feature based Fusion [21,24,33,61], Decision based Fusion [39,62,63,64]. Transform domain based algorithms: Wavelets based Fusion [24,40,44,56,65,66], Curvelet based Fusion [18,44], DCT based Fusion [22,42,67,68].
Figure 3. Sample multi-focus images from the Grayscale dataset.
Figure 4. Sample images from the Lytro multi-focus image fusion dataset.
Figure 5. Visual comparison of the compared multi-focus image fusion methods on the fence image set of the Lytro dataset.
Figure 6. Visual comparison of the compared multi-focus image fusion methods on the clock image set of the Grayscale dataset.
Figure 7. Performance of different image fusion algorithms on the Grayscale dataset using objective fusion quality assessment metrics.
Figure 8. Performance of different image fusion algorithms on the Lytro dataset using objective fusion quality assessment metrics.
Figure 9. Borda count results on both datasets. The number at the top of each column shows its rank among the 24 compared methods.
Figure 10. Execution time comparison of the compared methods.
Table 1. List of multi-focus image fusion (MIF) algorithms used in this study. Category gives the category of the algorithm, S for spatial domain and T for transform domain based methods, and the sub-category of the algorithm is provided in parentheses.
Method | Category | Reference
CSR | S (Pixel-based) | Convolutional Sparse Representation [20]
PCA | S (Feature-based) | Principal Component Analysis [24]
DSIFT | S (Feature-based) | Dense SIFT [33]
DCTLP | T (DCT) | Discrete Cosine Transform with Laplacian Pyramid [42]
WSSM | T (DWT) | Wavelet based Statistical Sharpness Measure [65]
ICA | S (Feature-based) | Independent Component Analysis [21]
DCHWT | T (DCT) | Discrete Cosine Harmonic Wavelet Transform [22]
GFF | S (Pixel-based) | Guided filtering [49]
IFM | S (Pixel-based) | Image matting [50]
GIF | S (Pixel-based) | Guided filtering [67]
CBF | S (Pixel-based) | Image fusion based on cross bilateral filter [52]
DCTV | T (DCT) | DCT with variance [66,67]
IFGD | S (Pixel-based) | Gradient domain [59]
MSMFM | S (Pixel-based) | Multi-scale Morphological Focus Measure [57]
MSTSR | T (DWT) | Multi-scale Transform and Sparse Representation [66]
MWGF | S (Pixel-based) | Multi-scale weighted gradient-based fusion [58]
OI | S (Pixel-based) | Orientation Information and Pulse Coupled Neural Network [55]
NSCT | T (Contourlet) | Neural Network in Nonsubsampled Contourlet Transform [56]
QTD | S (Pixel-based) | Quadtree-based multi-focus image fusion [53]
IFC | T (DCT) | DCT Domain and Harmonic Wavelet [22]
MSMFMg | S (Pixel-based) | Boundary based focus measurement [57]
SSDI | S (Pixel-based) | Fusion using self-similarity and depth information [54]
DSIFT2 | S (Feature-based) | Dense SIFT for ghost-free multi-exposure fusion [61]
GRW | S (Pixel-based) | Generalized Random Walks for Fusion [72]
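To make the spatial-domain, pixel-based category in Table 1 concrete, the sketch below fuses two registered grayscale multi-focus images by selecting, at each pixel, the source image with the higher local variance, a simple focus measure. This is a minimal illustration of the general idea only, not an implementation of any method listed in the table; the function names and the window size are our own choices.

```python
# Minimal pixel-based multi-focus fusion sketch (illustrative only; not one of
# the methods in Table 1). Each output pixel is copied from whichever source
# image is locally sharper, judged by local variance in a square window.
import numpy as np
from scipy.ndimage import uniform_filter

def local_variance(img, size=9):
    """Local variance via box filters; larger values indicate sharper regions."""
    img = img.astype(np.float64)
    mean = uniform_filter(img, size)
    mean_sq = uniform_filter(img * img, size)
    return mean_sq - mean * mean

def fuse_pixel_based(img_a, img_b, size=9):
    """Fuse two registered grayscale multi-focus images by per-pixel sharpness selection."""
    mask = local_variance(img_a, size) >= local_variance(img_b, size)
    return np.where(mask, img_a, img_b)
```

In practice, methods in this category refine the selection map (e.g., with guided filtering, matting, or morphological post-processing) to avoid artifacts along focus boundaries; the sketch omits such refinement for brevity.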
Table 2. Objective image fusion quality assessment metrics used in performance evaluation of the compared methods.
Metric Abbr. | Metric | Reference
Information theory based metrics:
Q_MI | Normalized mutual information | [78,79]
Q_NCIE | Non-linear correlation metric | [81]
Q_TE | Tsallis entropy | [82]
VIFF | Visual information fidelity metric | [80]
Feature based metrics:
Q_G | Gradient based metric | [76]
Q_SF | Spatial frequency based metric | [83]
Q_M | Multi-scale metric | [84]
Q_P | Moments based metric | [85]
Structural similarity based metrics:
Q_S | Variance based metric | [86]
Q_Y | Yang's metric | [87]
Human perception based metrics:
Q_CV | Chen–Varshney's metric | [88]
Q_CB | Chen–Blum's metric | [89]
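As an illustration of how the information theory based metrics in Table 2 are computed, the sketch below estimates a normalized mutual information score in the spirit of Q_MI [78,79] for 8-bit grayscale images using 256-bin histograms. The histogram settings and the exact normalization are assumptions made for illustration; the cited papers define the precise formulations used in our evaluation.

```python
# Sketch of a normalized-mutual-information style fusion metric (Q_MI-like).
# Assumes 8-bit grayscale numpy arrays; bin count and normalization are
# illustrative choices, not necessarily identical to [78,79].
import numpy as np

def _entropy(img, bins=256):
    # Shannon entropy (in bits) of the intensity histogram.
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def _mutual_info(x, y, bins=256):
    # Mutual information estimated from the joint intensity histogram.
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins,
                                 range=[[0, 256], [0, 256]])
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz]))

def q_mi(src_a, src_b, fused, bins=256):
    # Source-to-fused mutual information, normalized by the marginal entropies
    # of each source/fused pair and summed over both sources.
    return 2.0 * (
        _mutual_info(src_a, fused, bins) / (_entropy(src_a, bins) + _entropy(fused, bins))
        + _mutual_info(src_b, fused, bins) / (_entropy(src_b, bins) + _entropy(fused, bins))
    )
```

A higher score indicates that the fused image preserves more of the intensity information of both source images.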
Table 3. Summary of performance evaluation using information based metrics. For each metric, the numbers (1), (2), and (3) show the three best performing algorithms, respectively.
Metric | Grayscale | Lytro
Q_MI | (1) MSMFMg, (2) DSIFT, (3) QTD | (1) QTD, (2) MSMFM, (3) SSDI
Q_NCIE | (1) OI, (2) MSMFMg, (3) QTD | (1) OI, (2) MSMFM, (3) QTD
Q_TE | (1) OI, (2) DSIFT2, (3) NSCT | (1) ICA, (2) GFF, (3) DSIFT2
VIFF | (1) IFGD, (2) ICA, (3) MSTSR | (1) IFGD, (2) MSTSR, (3) QTD
Table 4. Summary of performance evaluation using feature based metrics. For each metric, the numbers (1), (2), and (3) show the three best performing algorithms, respectively.
Metric | Grayscale | Lytro
Q_G | (1) MSMFMg, (2) GIF, (3) QTD | (1) MSMFM, (2) DSIFT, (3) QTD
Q_M | (1) DSIFT, (2) QTD, (3) DCTV | (1) QTD, (2) DSIFT, (3) DCTV
Q_SF | (1) IFGD, (2) ICA, (3) DCTV | (1) DSIFT, (2) DCTV, (3) IFM
Q_P | (1) GIF, (2) DSIFT, (3) MSMFM | (1) GFF, (2) GIF, (3) MSMFM
Table 5. Summary of performance evaluation using structural similarity based metrics. For each metric, the numbers (1), (2), and (3) show the three best performing algorithms, respectively.
Metric | Grayscale | Lytro
Q_S | (1) CBF, (2) DSIFT2, (3) ICA | (1) ICA, (2) CBF, (3) MSTSR
Q_Y | (1) GIF, (2) MSMFMg, (3) QTD | (1) MSMFM, (2) QTD, (3) DSIFT
Table 6. Summary of performance evaluation using human perception based metrics. For each metric, the numbers (1), (2), and (3) show the three best performing algorithms, respectively.
Metric | Grayscale | Lytro
Q_CV | (1) IFGD, (2) OI, (3) DCTV | (1) IFGD, (2) OI, (3) PCA
Q_CB | (1) GIF, (2) MSMFMg, (3) QTD | (1) QTD, (2) MSMFM, (3) DSIFT
Table 7. Borda count results on the test datasets. BC represents the cumulative Borda count score of each algorithm; the five best performing algorithms on each dataset are those with ranks 1–5.
Method | BC (Grayscale) | Rank (Grayscale) | BC (Lytro) | Rank (Lytro)
CBF | 131 | 16 | 149 | 13
CSR | 112 | 17 | 115 | 17
DCHWT | 39 | 24 | 56 | 22
DCTLP | 71 | 21 | 53 | 24
DCTV | 178 | 9 | 168 | 11
DSIFT | 227 | 4 | 226 | 2
GFF | 182 | 8 | 197 | 8
GIF | 210 | 5 | 206 | 6
ICA | 151 | 13 | 145 | 14
IFGD | 97 | 19 | 76 | 21
IFM | 207 | 6 | 199 | 7
PCA | 86 | 20 | 54 | 23
WSSM | 58 | 22 | 103 | 19
GRW | 52 | 23 | 81 | 20
DSIFT2 | 150 | 14 | 158 | 12
MWGF | 163 | 12 | 182 | 9
MSTSR | 170 | 11 | 176 | 10
MSMFM | 237 | 2 | 217 | 4
NSCT | 143 | 15 | 134 | 16
OI | 172 | 10 | 136 | 15
MSMFMg | 238 | 1 | 224 | 3
IFC | 106 | 18 | 111 | 18
SSDI | 188 | 7 | 207 | 5
QTD | 232 | 3 | 227 | 1
Table 8. Summary of qualitative and quantitative performance evaluation results. Only the five best performing methods are listed. For the quantitative evaluation, the methods are ranked from best to worst; for the qualitative evaluation the ranking is less definite, as some algorithms may be indistinguishable under visual inspection.
Dataset | Evaluation | Methods
GS | Qualitative | MSMFM, MSMFMg, SSDI, QTD, DSIFT
GS | Quantitative | MSMFMg, MSMFM, QTD, DSIFT, GIF
Lytro | Qualitative | QTD, DSIFT, MSTSR, NSCT, GFF
Lytro | Quantitative | QTD, DSIFT, MSMFMg, MSMFM, SSDI
Both | Qualitative | MSMFM, DSIFT, QTD, MSMFMg, SSDI
Both | Quantitative | MSMFMg, QTD, MSMFM, DSIFT, GIF
Table 9. Five best performing MIF algorithms in each category: spatial domain and transform domain.
Category | Method | Rank
Spatial Domain | MSMFMg | 1
Spatial Domain | QTD | 2
Spatial Domain | MSMFM | 3
Spatial Domain | DSIFT | 4
Spatial Domain | GIF | 5
Transform Domain | DCTV | 9
Transform Domain | MSTSR | 10
Transform Domain | NSCT | 16
Transform Domain | DCHWT | 18
Transform Domain | WSSM | 20
Table 10. The best performing MIF algorithm in each group.
Category | Group | Method | Rank
Spatial Domain | Pixel based Fusion | MSMFMg | 1
Spatial Domain | Feature based Fusion | DSIFT | 4
Transform Domain | DCT based Fusion | DCTV | 9
Transform Domain | Wavelets based Fusion | MSTSR | 10
Transform Domain | Contourlet/Curvelet based Fusion | NSCT | 16
