Effective contrast enhancement method for color night vision

https://doi.org/10.1016/j.infrared.2011.10.007

Abstract

Image fusion refers to techniques that integrate complementary information from multiple image sensors’ data in a way that makes the new images more suitable for human visual perception. This paper focuses on the low color contrast problem of linear fusion algorithms combined with the color transfer method. Firstly, the contrast of the infrared and visible images is enhanced using local histogram equalization and a median filter. Then the two enhanced images are fused into the three components of a Lab image by means of a simple linear fusion strategy. To enhance the color contrast between the target and the background, a scaling factor is introduced into the transfer equation of the b channel. Experimental results based on three different data sets show that both hot and cold targets are popped out with intense colors while the background details retain a natural color appearance. Target detection experiments based on target recognition area, detection rate, and target-background discrimination also show that the presented method performs better than previous methods.

Highlights

► Mapping night-vision images to L channel will enlarge the chromatism.
► Introducing a scaling factor into the transferring equation will enhance the color contrast.
► Some objective metrics including recognition area, detection rate and color distances are provided.
► Colorizing night-vision imagery directly in Lab space will bring richer color information.

Introduction

The availability of modern night vision systems, like image intensifiers and thermal cameras, enables operations at night and in adverse weather conditions. The two most common night-time imaging systems display either emitted infrared radiation (IR) or dim reflected light. Generally, the visible and IR images are complementary, i.e. the IR image has better hot-target contrast but fewer details than the visible image, whereas the visible image has much richer high-frequency information but worse target contrast, especially under poor illumination conditions. Thus, techniques for fusing the two images should be employed in order to provide a compact representation of the scene with increased interpretation capabilities. In recent years, fusion of visible and IR images has received increasing attention. Fusion methods include gray fusion (such as principal component analysis, contrast modulation, pyramid modulation, and wavelet transform) and color fusion. In principle, color imagery has several benefits over monochrome imagery for surveillance, reconnaissance, and security applications. Humans can distinguish roughly hundreds of times more color levels than gray levels, and many experiments show that color fusion may improve feature contrast, which allows for better scene segmentation and object detection [1], [2]. So color fusion is becoming an increasingly important research field and a number of color fusion methods have been proposed [3], [4], [5], [6].

One key topic of color fusion is color constancy. It has been shown that the lack of color constancy caused by inappropriate color mapping may hinder human performance [13]. In 2003, Toet [7], [8] first applied the color transfer technique, originally used for color alteration between visible images [9], to multi-band color night vision. This method matches the first order statistical properties (mean and standard deviation) of the night vision imagery to those of a target daylight color image. As a result, the color appearance of the colorized night vision image resembles that of the natural target image. However, the color space transformations between RGB and LMS and between LMS and lαβ, together with the logarithm and exponent operations, are time-consuming. Moreover, this method does not provide color constancy for dynamic imagery. To overcome these shortcomings, Hogervorst and Toet [2] introduced a simple and fast method to consistently apply natural daytime colors to multi-band night vision imagery. Their idea is easily implemented using standard color lookup table techniques to optimize the match between the false color fused image and the reference image. Once the lookup table has been derived, the color mapping can be applied in real time to multi-band image sequences of similar scenes. However, this method requires multi-band images and a daytime reference image that are in exact correspondence. Furthermore, their methods process all three channels of the color space with the same linear mapping in the color transfer step, resulting in low contrast between the target and the background. As a result, targets that are clear in the IR image may sometimes become invisible and difficult to detect, which can hinder target detection tasks [13].
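As an illustration, the global first order statistics transfer can be sketched as follows. This is a minimal sketch, assuming OpenCV's Lab conversion in place of the lαβ space used in [7], [8], [9]; the function name and the use of OpenCV are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of global color transfer by matching per-channel mean and
# standard deviation to a daylight reference image (assumed OpenCV Lab space,
# not the original lαβ space).
import cv2
import numpy as np

def global_color_transfer(false_color_bgr, reference_bgr):
    """Match the mean and standard deviation of each channel of a false-color
    fused image to those of a daylight reference image."""
    src = cv2.cvtColor(false_color_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    ref = cv2.cvtColor(reference_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)

    out = np.empty_like(src)
    for c in range(3):
        s_mean, s_std = src[..., c].mean(), src[..., c].std() + 1e-6
        r_mean, r_std = ref[..., c].mean(), ref[..., c].std()
        # Shift and scale so the fused channel statistics match the reference.
        out[..., c] = (src[..., c] - s_mean) * (r_std / s_std) + r_mean

    out = np.clip(out, 0, 255).astype(np.uint8)
    return cv2.cvtColor(out, cv2.COLOR_LAB2BGR)
```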

Popping out targets is another key topic of color night vision; it enhances human visual capabilities in discriminating features from their background and is essential to target detection tasks. However, to date, little attention has been paid to this problem. In 2007, Wang [10] proposed a nonlinear transfer method based on the local mean value of the IR image after nonlinear fusion. This method can pop out hot targets with an intense red color while rendering the background with a natural color appearance. Zheng [11] and Ma [12] proposed local color night vision algorithms based on image segmentation, recognition and local color transfer to enhance the color mapping effect. However, these methods are even more expensive than Toet’s original color transfer [7], [8], since they involve time-consuming procedures such as nonlinear diffusion, local recognition, local comparisons and image segmentation. In addition, the nonlinear scheme based on Gaussian convolution sometimes introduces blurring effects. This may hinder situational awareness. In 2010, Yin et al. [13] presented a color contrast enhancement method for color night vision. They introduce a ratio of local to global divergence of the IR image to improve the color contrast in the color transfer process after a simple linear fusion. Experiments show that this method improves the hit rate for target detection compared with the global statistics method.

However, due to bad weather or environmental conditions, night vision imagery is often degraded by high noise or low contrast, a problem the above works rarely consider. In addition, although Yin’s method improves target detection, cold targets are not highlighted effectively and some backgrounds have unnatural colors in the final fused color image, e.g. road and soil appear off-white (see Fig. 3c). To obtain better color night vision effects, we present a new, simple and effective color fusion method in the Lab color space, which can enhance image quality and improve target detection ability while producing natural colors. The system is shown in Fig. 1. Building on existing work, the visible and IR images are first preprocessed using contrast enhancement and a median filter. Then the two enhanced images are fused into the three components of a Lab image by means of a simple linear fusion strategy. Different from the global statistics method, the transferring equation in the b channel is amended by a scaling factor, which changes according to the distance between the current luminance value and the mean value. In this way, when appropriate reference images are selected, the final rendered image has an overall clear and natural daytime color appearance and the targets are easy to recognize. This may help observers improve situational awareness and reduce detection and recognition time.

Section snippets

Linear image fusion method

Our fusion approach includes two steps: (1) pre-processing and (2) linear fusion.
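A minimal sketch of these two steps could look as follows. It assumes CLAHE as the local histogram equalization, a 3 × 3 median filter and illustrative channel weights; the exact fusion coefficients of the paper are not reproduced here.

```python
# Minimal sketch of pre-processing and linear fusion (assumed parameters and
# channel assignments, for illustration only).
import cv2
import numpy as np

def preprocess(img_gray, clip_limit=2.0, tile=(8, 8), ksize=3):
    """Local contrast enhancement followed by median filtering."""
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile)
    enhanced = clahe.apply(img_gray)          # local histogram equalization
    return cv2.medianBlur(enhanced, ksize)    # suppress impulsive noise

def linear_fuse(vis_gray, ir_gray):
    """Combine the enhanced visible and IR images into a false-color Lab image."""
    vis = preprocess(vis_gray).astype(np.float32)
    ir = preprocess(ir_gray).astype(np.float32)

    L = 0.5 * vis + 0.5 * ir   # luminance carries both bands (assumed weights)
    a = vis - ir               # red-green axis from the band difference (assumed)
    b = ir - 128.0             # yellow-blue axis driven by IR, amended later (assumed)
    return np.dstack([L, a, b])
```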

Nonlinear color transfer

The false color fused images often have an unnatural color appearance and lack color constancy. In this section, a simple technique is described in order to pop out targets and render the backgrounds with natural colors. Different from the literature [2,7–13], the color transfer is applied directly in the device-independent Lab (CIE 1976) color space. To make cold and hot targets stand out, the b channel is enhanced by a scaling factor.
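The amended transfer in the b channel can be sketched as below. The exact form of the scaling factor is an assumption made for illustration; the section only states that it changes with the distance between the current luminance value and the mean luminance, so that pixels far from the average (hot or cold targets) receive more intense colors.

```python
# Minimal sketch of the b-channel transfer with a luminance-dependent scaling
# factor (the factor's exact form is assumed).
import numpy as np

def transfer_b_channel(b_src, L_src, b_ref, gain=1.0):
    """Transfer b-channel statistics with a per-pixel, luminance-dependent scale."""
    s_mean, s_std = b_src.mean(), b_src.std() + 1e-6
    r_mean, r_std = b_ref.mean(), b_ref.std()

    # Scaling factor: 1 near the mean luminance, larger far from it, so that
    # hot and cold targets are pushed towards more saturated colors.
    dist = np.abs(L_src - L_src.mean())
    k = 1.0 + gain * dist / (dist.max() + 1e-6)

    return k * (b_src - s_mean) * (r_std / s_std) + r_mean
```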

In a color transfer technique, the fused color image

Colorizing results

To validate the effectiveness of our method, three image sets are fused. For comparison, we also obtain the color fused images produced by Toet’s method without enhancement [7] and by Yin’s enhancement method [13]. The three image sets are shown in Fig. 2. Fig. 2a shows the IR images and Fig. 2b the corresponding visible images.

Fig. 3 shows the comparison among our method and Toet’s and Yin’s methods. Fig. 3b–d are the colorizing results produced by Toet’s global statistics method, Yin’s method in YUV space

Discussions

Human visual inspection would be a good choice for evaluating colorized images. However, subjective tests are often time-consuming and difficult to reproduce, and exactly identical test conditions cannot be guaranteed. Unfortunately, there is no effective and well-accepted objective metric for color fusion schemes. However, since we aim at improving the identification ability of the targets by pre-processing and enhancement methods, we can roughly evaluate different color fusion methods
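For instance, target-background discrimination can be roughly quantified as a color distance in Lab space. The sketch below assumes a CIE76 (Euclidean) distance between the mean target color and the mean background color, computed from a hypothetical binary target mask; this is one plausible reading of the color-distance metric, not the paper's exact definition.

```python
# Rough sketch of a target-background color distance in Lab space (CIE76),
# using a hypothetical binary target mask.
import cv2
import numpy as np

def target_background_delta_e(color_bgr, target_mask):
    """Euclidean Lab distance between mean target and mean background colors."""
    lab = cv2.cvtColor(color_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    target_mean = lab[target_mask > 0].mean(axis=0)
    background_mean = lab[target_mask == 0].mean(axis=0)
    return float(np.linalg.norm(target_mean - background_mean))
```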

Conclusions

In this paper, we discuss a simple and effective color contrast enhancement method for color night vision. Before fusion, local histogram equalization and a median filter are introduced to improve the quality of the visible and IR images. Then a simple linear fusion method is proposed. In order to obtain natural colors and improve target detection ability, an amended global color transfer method is applied. Under the control of the scaling factors, the targets can be popped out and meanwhile the

Acknowledgements

This study was supported by the China Postdoctoral Science Foundation (Grant No. 20110491415) and the Technology Project of the Civil Aviation Administration of China (Grant No. MHRD201124). Many thanks to Alexander Toet and the TNO Human Factors Research Institute for providing the source IR and visible image sequences, which are publicly available online at http://www.imagefusion.org.

