A new image fusion performance metric based on visual information fidelity
Highlights
► We propose a new image fusion metric (VIFF) based on visual information fidelity.
► What constitutes a fair performance comparison among image fusion metrics is discussed.
► VIFF is compared with 8 popular image fusion metrics on a database.
► VIFF shows the highest predictive performance among them.
► An approximate estimation of each fusion metric's time complexity is given.
Introduction
Multiple imaging systems are commonly used to improve situational awareness of a complex scene: together they capture the scene without losing information, and the effective visual information from the source images is then integrated into a single high-quality result. This integration process is called fusion, and it is accomplished by fusion algorithms. Because an increasing number of fusion approaches have been proposed in recent years [1], evaluating fusion algorithm performance has become an important issue.
The conventional way to evaluate fusion performance is to have a number of trained observers score the fused images subjectively, which is costly and cannot be used in an automatic system. As a result, objective fusion metrics have become the primary focus, because they can automatically predict the performance of fusion algorithms.
Mutual information (MI), given by Qu et al. [2], was the most commonly used objective metric for image fusion. Hossny et al. [3] revised the MI metric and proposed the normalized MI (NMI) metric. Both metrics take advantage of information theory and demonstrate its value for fusion assessment. Xydeas and Petrovic [4] proposed an edge-based fusion performance measure, QE, which compares edge strength and orientation between the source and fused images; their measure performs well for fusion assessment. The structural similarity index measure (SSIM) [8], which had shown strong performance in image quality assessment, was also adopted for fusion metrics. For instance, Piella [5] used weighted SSIMs between the source images and the fused image in each block to define the weighted fusion quality (WFQ) and the edge-dependent fusion quality index (EDFQI). Cvejic et al. [6] improved Piella’s work by determining the weighting parameters from the block similarity between the two source-fused image pairs. Unlike Piella’s work, which considers both gray-level and edge information, the metric of Yang et al. [7] focuses only on the gray-level information; there, WFQ is not used exclusively to predict fusion performance, and under certain conditions the maximum SSIM of each block is substituted for the WFQ value. Chen and Varshney [9] proposed a fusion metric based on the human visual system (HVS): it applies the contrast sensitivity function (CSF) to the entire image and then measures local spatial information transfer on a region-by-region basis. They studied the best parameter settings for their algorithm and assessed it on different types of fused images. Chen and Blum [10] presented another HVS-based fusion metric that relies not on edge information but on local contrast in an empirically CSF-filtered image; evaluated on a night-vision image test set, it achieved high performance.
Because the assessment of an image fusion scheme is strongly correlated with image quality, advances in image quality assessment have a great impact on fusion metrics. This paper presents a new image fusion metric based on visual information fidelity (VIF), which has shown high predictive performance for image quality. The proposed metric improves both predictive performance and computational complexity. Section 2 briefly introduces VIF, and Section 3 describes the principle and framework of our fusion metric algorithm. Section 4 discusses experimental results and compares the proposed metric with several commonly used fusion metrics on Petrovic’s subjective test database [11]. Finally, conclusions are drawn in Section 5.
Section snippets
Principle of visual information fidelity
This section reviews visual information fidelity, which is an effective full reference image quality metric based on natural scene statistics (NSS) theory. The principle of VIF is shown in Fig. 1.
As depicted in Fig. 1, VIF first decomposes the natural image into several sub-bands and parses each sub-band into blocks. Then, VIF measures the visual information by computing mutual information in the different models in each block and each sub-band. Finally, the image quality value is measured by
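In the standard VIF formulation, the quality value is the ratio of the mutual information the HVS can extract from the test image to that extractable from the reference image, accumulated over all blocks i and sub-bands (notation follows the I(Ci, Ei) / I(Ci, Fi) convention used later in this paper; this is the commonly stated form, not a quote from the snippet):

$$\mathrm{VIF} \;=\; \frac{\sum_{\text{subbands}}\sum_{i} I\!\left(C_i;\, F_i\right)}{\sum_{\text{subbands}}\sum_{i} I\!\left(C_i;\, E_i\right)},$$

where $C_i$ is the reference image block under the natural scene (GSM) model, $E_i$ is the block perceived through the HVS (visual noise only), and $F_i$ is the perceived test image block (distortion plus visual noise).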
Visual information fidelity for fusion (VIFF) performance metric
In VIF, I(Ci, Ei) is equivalent to a signal-to-noise ratio (SNR) at the ith spatial position of a sub-band when only visual noise is considered, while I(Ci, Fi) is the SNR at the same position and sub-band with both distortion and visual noise. Thus, the principle of VIF can be summarized in the following four steps: first, VIF decomposes the reference and test images into several sub-bands; second, VIF measures the spatially local SNR of the images, with and without distortion, at multiple scales;
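These steps can be illustrated with a simplified, self-contained sketch. This is an illustration of the idea, not the authors' exact VIFF algorithm: the wavelet decomposition is replaced by crude dyadic downsampling, and the visual-noise variance `sigma_n2`, the block size, and the per-block selection rule are all assumptions.

```python
import numpy as np

def _block_info(c, f, block=8, sigma_n2=4.0):
    """Per-block Gaussian-model information, in the spirit of VIF:
    without distortion: log2(1 + var_C / sigma_n2)
    with distortion:    log2(1 + g^2 * var_C / (var_V + sigma_n2))"""
    h = c.shape[0] - c.shape[0] % block
    w = c.shape[1] - c.shape[1] % block
    ref, fused = [], []
    for i in range(0, h, block):
        for j in range(0, w, block):
            cb = c[i:i+block, j:j+block].ravel()
            fb = f[i:i+block, j:j+block].ravel()
            m = np.cov(cb, fb)                    # 2x2 covariance matrix
            var_c, cov_cf, var_f = m[0, 0], m[0, 1], m[1, 1]
            g = cov_cf / (var_c + 1e-10)          # distortion-model gain
            var_v = max(var_f - g * cov_cf, 0.0)  # additive-noise variance
            ref.append(np.log2(1.0 + var_c / sigma_n2))
            fused.append(np.log2(1.0 + g * g * var_c / (var_v + sigma_n2)))
    return np.array(ref), np.array(fused)

def viff_sketch(src_a, src_b, fused, levels=3):
    """Simplified VIFF: at each scale, pick per block the source-fused pair
    whose source carries more information, then take the information ratio."""
    a, b, f = (x.astype(float) for x in (src_a, src_b, fused))
    scores = []
    for _ in range(levels):
        ra, fa = _block_info(a, f)
        rb, fb = _block_info(b, f)
        pick_a = ra >= rb
        num = np.where(pick_a, fa, fb).sum()
        den = np.where(pick_a, ra, rb).sum()
        scores.append(num / (den + 1e-10))
        # crude dyadic downsampling stands in for a wavelet decomposition
        a, b, f = (x[::2, ::2] for x in (a, b, f))
    return float(np.mean(scores))
```

When the fused image is identical to both sources, no visual information is lost and the sketch returns a score of 1; degradation in the fused image lowers the with-distortion SNR terms and hence the score.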
Validation experiment and discussion
There are several problems with current comparisons of existing fusion metric algorithms. For example, some works [2], [3], [5], [6], [7] compared their evaluation algorithms against readers’ subjective judgments or against common objective image quality assessment methods (such as PSNR). The limitations are as follows:
1. Direct subjective perception.
It is not precise to assess fusion metrics using subjective perception. The accuracy and consistency of subjective perception are
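When a subjective test database is available, a metric's predictive performance is typically quantified with rank-correlation statistics between the metric's scores and mean subjective ratings. A minimal sketch (the score arrays below are hypothetical, for illustration only):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks
    (assumes no ties, which argsort-based ranking handles cleanly)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))

# Hypothetical data: one objective metric score and one mean
# subjective rating per fused image.
metric_scores = np.array([0.61, 0.72, 0.55, 0.80, 0.68])
subjective    = np.array([3.1, 3.8, 2.9, 4.5, 3.5])
print(spearman_rho(metric_scores, subjective))  # -> 1.0 (identical rank order)
```

A rank correlation near 1 means the metric orders fusion results the same way the observers do, which is exactly the property a fusion metric is validated on.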
Conclusions
A new image fusion assessment metric, VIFF, is presented in this paper. VIFF first decomposes the source and fused images. Then, VIFF utilizes the models in VIF (GSM model, Distortion model and HVS model) to capture visual information from the two source-fused pairs. With the help of an effective visual information index, VIFF measures the effective visual information of the fusion in all blocks in each sub-band. Finally, the assessment result is calculated by integrating all the information in
Acknowledgements
The authors thank the editors and anonymous reviewers for their comments and suggestions. This work was supported jointly by the National Natural Science Foundation of China (61004088), the Key Foundation for Basic Research from the Science and Technology Commission of Shanghai (09JC1408000) and the National Basic Research Program (973 Program: 6131010306).
References (21)
- Yang et al., A novel similarity based quality metric for image fusion, Information Fusion (2008)
- Chen and Varshney, A human perception inspired quality metric for image fusion based on regional information, Information Fusion (2007)
- Petrovic, Subjective tests for image fusion evaluation and objective metric validation, Information Fusion (2007)
- Wainwright et al., Random cascades on wavelet trees and their use in analyzing and modeling natural images, Applied and Computational Harmonic Analysis (2001)
- M.I. Smith, J.P. Heather, Review of image fusion technology in 2005, in: Proceedings of SPIE, 2005, pp....
- Qu et al., Information measure for performance of image fusion, Electronics Letters (2002)
- Hossny et al., Comments on ‘Information measure for performance of image fusion’, Electronics Letters (2008)
- Xydeas and Petrovic, Objective image fusion performance measure, Electronics Letters (2000)
- G. Piella, A new quality metric for image fusion, in: Proceedings of International Conference on Image Processing,...
- N. Cvejic, A. Łoza, D. Bull, N. Canagarajah, A similarity metric for assessment of image fusion algorithms, in:...