Article

Robust Chromatic Adaptation Based Color Correction Technology for Underwater Images

1 School of Mechanical Engineering, Shenyang Jianzhu University, Shenyang 110168, China
2 State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2020, 10(18), 6392; https://doi.org/10.3390/app10186392
Submission received: 6 August 2020 / Revised: 28 August 2020 / Accepted: 11 September 2020 / Published: 14 September 2020
(This article belongs to the Section Robotics and Automation)

Featured Application

This work aims to recover realistic colors in underwater images, which is important for the detection and classification of underwater targets.

Abstract

Recovering correct, or at least realistic, colors of underwater scenes is a challenging issue for image processing due to unknown imaging conditions, including the optical water type, scene location, illumination, and camera settings. Under the assumption that the illumination of the scene is uniform, a chromatic adaptation-based color correction technology is proposed in this paper to remove the color cast using a single underwater image without any other information. First, the underwater RGB image is linearized so that its pixel values are proportional to the light intensities arriving at the pixels. Second, the illumination is estimated in a uniform chromaticity space based on the white-patch hypothesis. Third, the chromatic adaptation transform is implemented in the device-independent XYZ color space. Qualitative and quantitative evaluations both show that the proposed method outperforms the other test methods in terms of color restoration, especially for images with severe color cast. The proposed method is simple yet effective and robust, which is helpful in obtaining the in-air images of underwater scenes.

1. Introduction

Underwater imaging is increasingly used in many important applications, such as marine biology and archaeology and underwater surveying and mapping [1]. However, captured underwater images are generally degraded by scattering and absorption. Color cast and contrast loss are the main consequences of the underwater imaging degradation process, and the goal of underwater image processing is to rectify the color cast and enhance visibility [2,3]. Many image formation model based (IFM-based) restoration methods and IFM-free enhancement methods have been proposed to achieve this goal [1,4]. Restoration methods try to reverse the underwater imaging process according to priors such as the dark channel prior (DCP) [5,6] and scene depth maps [3,7]. Enhancement methods use qualitative subjective criteria to produce more visually pleasing images [4]. Recently, many deep-learning-based methods have been developed for underwater image restoration and enhancement [3,8,9,10,11,12]. However, deep-learning-based methods are hindered by the lack of large training datasets [12].
Most underwater image restoration and enhancement methods try to correct the color cast and enhance the contrast simultaneously. Color correction methods are often used casually, and there is no specialized, comprehensive comparison of different color correction methods on underwater images. This paper focuses on the color correction of underwater images under the assumption that sunlight is the only light source. Color cast arises because light attenuation, caused by absorption and scattering, varies with wavelength [9]. More precisely, color cast forms in two stages of light propagation through the water medium: first, as light travels downward before reaching the scene, and second, as light reflected by the scene travels to the camera [7,13]. Since the object of interest, called the foreground, is usually closer to the camera than the background, the color cast of the foreground formed in the second stage is generally small. The color cast formed in the first stage is dominant, especially for images captured in deep water. Ignoring the color cast formed in the second stage, an underwater image can be regarded as captured with the scene illuminated by an unknown light source. Replacing this unknown source with a sunlight-like illuminant will therefore make underwater images look natural.
Chromatic adaptation refers to the change in the photoreceptive sensitivity of the human visual system under varying viewing conditions, such as illumination [14]. The chromatic adaptation mechanism tries to eliminate the effect of the illuminant and has been widely applied in many fields [15,16,17,18]. In this paper, a chromatic adaptation based color correction method is proposed for underwater images, and a comparison between this method and several conventional methods [18,19,20,21,22] is drawn. Example results obtained by the proposed method can be seen in Figure 1.
The contributions of this paper are as follows. First, a simple yet effective and robust color correction method for underwater images is proposed. Second, a comprehensive comparison of several typical white balance and color correction methods on underwater images is drawn. Third, several key issues of concern in the color correction of underwater images are discussed.

2. Related Work

Color correction is also known as white balance in image processing. Many popular automatic white balance algorithms, such as the gray-world (GW) and white-patch (WP) methods, are based on low-level image features due to their simple concepts and surprising performance [23]. The GW method works under the assumption that, given an image with sufficient color variation, the average reflectance of the scene is achromatic [19,20]. The WP method, also known as the perfect reflector method, is based on the hypothesis that the brightest pixel in an image corresponds to an object point on a glossy or specular surface [20,24] that reflects the full range of the light it captures [18]. Consequently, the color of the brightest pixel is taken as the color of the light source, and the brightest pixel is assigned as the reference white point for white balance. The dynamic-threshold (DT) method [20] is another simple white balance method. It finds reference white points around the mean chromaticity coordinate and takes the ratios of the maximum luminance value to the mean RGB values of the white points as the channel gains.
Conventional white balance methods and histogram equalization are often used directly to correct the color cast of underwater images [2,7,25,26]. Considering the characteristics of underwater imaging, many specialized methods have been proposed. To weaken the reddish cast produced by GW, an adaptive GW was developed by merging the global and local averages [27]. Two variants of the shades-of-grey (SG) method [28,29] have also been adopted to implement color correction efficiently. Henke et al. [30] use a binary depth map, obtained by applying DCP [5] to the GB channels, to estimate separate channel gains for different regions of an underwater image. Ancuti et al. [31,32] blend the raw image with its color-transferred counterpart according to a weight map reflecting the desired level of correction. Li et al. [33] developed a learning-based, weakly supervised color transfer model. Liu et al. [34] correct regional or full-extent color deviation via frequency-based color-tone estimation. Liu et al. [35] apply a local surface reflectance statistical prior to the Retinex image formation model. A statistical-based (SB) method [21,36] is applied to compress the outliers and expand the mid-range, which has an image enhancement effect due to the expanded dynamic range.
Most of these methods process the RGB color channels separately, which is prone to introducing artifacts, especially for underwater images, due to the uneven attenuation of the three channels. Implementing color correction in a color space other than RGB is a good solution to this problem. Bianco et al. [37] apply the GW hypothesis in the Ruderman opponent color space Lαβ to correct the color of underwater images. Emberton et al. [38] proposed a chromatic adaptation based, water-type-dependent white balance (WTDWB) method that divides underwater images into three categories: blue-dominated, turquoise, and green-dominated. The illuminations of turquoise and green-dominated images are estimated by applying WP and GW, respectively, on the nonlinear RGB images. For blue-dominated images, white balance is not applied, to avoid introducing artifacts [38].

3. Method

The proposed method is based on the chromatic adaptation theory and the WP hypothesis. The brightest p percent of pixels are determined based on their luminance values, and the average chromaticity coordinate of these pixels is assigned as the color of the reference white point. To obtain an accurate average chromaticity, the chromaticity coordinates are calculated in the uniform chromaticity space CIE 1960 UCS. The Von Kries theory [39] is then applied to implement the color correction in the CIE 1931 XYZ space. The flow of the proposed method is shown in Figure 2, and the details are as follows.

3.1. Chromatic Adaptation Theory

Based on the Von Kries theory [39], the tristimulus values [Xs Ys Zs]^T under the source illumination can be mapped to the tristimulus values [Xd Yd Zd]^T under the destination illumination as follows:
$$
\begin{bmatrix} X_d \\ Y_d \\ Z_d \end{bmatrix}
= M^{-1}
\begin{bmatrix} \alpha & 0 & 0 \\ 0 & \beta & 0 \\ 0 & 0 & \gamma \end{bmatrix}
M
\begin{bmatrix} X_s \\ Y_s \\ Z_s \end{bmatrix}
\tag{1}
$$
where M is a 3 × 3 matrix that transforms the tristimulus values [X Y Z]^T in the CIE 1931 XYZ space to the LMS cone-like space. Several such transforms have been derived from different data sets and criteria [16]. The Bradford transform [40,41] is adopted in this paper, and the matrix M is:
$$
M = \begin{bmatrix}
 0.8951 &  0.2664 & -0.1614 \\
-0.7502 &  1.7135 &  0.0367 \\
 0.0389 & -0.0685 &  1.0296
\end{bmatrix}
\tag{2}
$$
In Equation (1), α, β, and γ are defined as:
$$
\alpha = \frac{L_{dw}}{L_{sw}}, \quad
\beta = \frac{M_{dw}}{M_{sw}}, \quad
\gamma = \frac{S_{dw}}{S_{sw}}
\tag{3}
$$
where [Ldw Mdw Sdw]^T and [Lsw Msw Ssw]^T are the cone signals corresponding to the destination white point [Xdw Ydw Zdw]^T and the source white point [Xsw Ysw Zsw]^T, satisfying:
$$
\begin{bmatrix} L_{dw} \\ M_{dw} \\ S_{dw} \end{bmatrix}
= M \begin{bmatrix} X_{dw} \\ Y_{dw} \\ Z_{dw} \end{bmatrix}, \quad
\begin{bmatrix} L_{sw} \\ M_{sw} \\ S_{sw} \end{bmatrix}
= M \begin{bmatrix} X_{sw} \\ Y_{sw} \\ Z_{sw} \end{bmatrix}
\tag{4}
$$
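To make the transform concrete, the following Python/NumPy sketch implements Equations (1)–(4); the function and variable names are our own illustration and not taken from the paper.

```python
import numpy as np

# Bradford XYZ-to-cone matrix from Equation (2)
M_BRADFORD = np.array([
    [ 0.8951,  0.2664, -0.1614],
    [-0.7502,  1.7135,  0.0367],
    [ 0.0389, -0.0685,  1.0296],
])

def chromatic_adaptation(xyz, src_white, dst_white, M=M_BRADFORD):
    """Map tristimulus values under the source illuminant to values
    under the destination illuminant (Equations (1)-(4))."""
    lms_src = M @ np.asarray(src_white)        # cone signals of source white, Eq. (4)
    lms_dst = M @ np.asarray(dst_white)        # cone signals of destination white
    gains = lms_dst / lms_src                  # alpha, beta, gamma, Eq. (3)
    A = np.linalg.inv(M) @ np.diag(gains) @ M  # complete adaptation matrix, Eq. (1)
    return np.asarray(xyz) @ A.T               # works for (3,) vectors or (..., 3) images
```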

3.2. Transform from sRGB to XYZ

As described above, if the XYZ values of each pixel in an image and the XYZ values of the white point under the scene illumination are known, the XYZ values of the image under a known canonical illuminant can be obtained. For different RGB color spaces, such as sRGB, Adobe RGB, and ProPhoto RGB, the transforms from RGB to XYZ differ due to different reference primaries and illuminations [42]. Given an RGB image, it is difficult to judge which RGB color space the image is encoded in, since the color space information recorded in the exchangeable image file format header can be changed or lost during post-processing with image editing software [43]. In future work, a learning-based method could be used to identify the color space of an arbitrary RGB image. Here, we simply assume that images are encoded in the sRGB color space, since sRGB is representative of the majority of devices on which color is and will be viewed [15]. For an image encoded in sRGB, the CIE 1931 XYZ values can be computed from the RGB values in three steps [44]. First, the digital code values [Rn-bit Gn-bit Bn-bit] are converted to the nominal range [0, 1] following Equation (5). Since the same operation is performed on all three channels, the notation V is used to represent any of them.
$$
V'_{sRGB} = \frac{V_{n\text{-}bit}}{2^n - 1}
\tag{5}
$$
where n is the number of bits per channel; for example, n = 8 for 24-bit RGB images. Second, V′sRGB is linearized, known as de-gamma correction, as follows:
$$
V_{sRGB} =
\begin{cases}
\dfrac{V'_{sRGB}}{12.92}, & \text{if } V'_{sRGB} \le 0.03928 \\[2mm]
\left(\dfrac{V'_{sRGB} + 0.055}{1.055}\right)^{2.4}, & \text{otherwise}
\end{cases}
\tag{6}
$$
Third, the linear RGB values [RsRGB GsRGB BsRGB] are converted to XYZ values using the following derived relationship:
$$
\begin{bmatrix} X \\ Y \\ Z \end{bmatrix}
= \begin{bmatrix}
0.4127 & 0.3586 & 0.1808 \\
0.2132 & 0.7172 & 0.0724 \\
0.0195 & 0.1197 & 0.9517
\end{bmatrix}
\begin{bmatrix} R_{sRGB} \\ G_{sRGB} \\ B_{sRGB} \end{bmatrix}
\tag{7}
$$
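A minimal Python/NumPy sketch of Equations (5)–(7); the function name and the default of 8-bit input are our assumptions.

```python
import numpy as np

# Linear sRGB-to-XYZ matrix from Equation (7)
M_RGB2XYZ = np.array([
    [0.4127, 0.3586, 0.1808],
    [0.2132, 0.7172, 0.0724],
    [0.0195, 0.1197, 0.9517],
])

def srgb_to_xyz(img, n=8):
    """Convert an n-bit-per-channel sRGB image to CIE 1931 XYZ."""
    v = img.astype(np.float64) / (2 ** n - 1)        # Equation (5)
    linear = np.where(v <= 0.03928,                  # Equation (6), de-gamma
                      v / 12.92,
                      ((v + 0.055) / 1.055) ** 2.4)
    return linear @ M_RGB2XYZ.T                      # Equation (7)
```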

3.3. Color Correction

To estimate the white point under the scene illumination, the XYZ values of the image are first converted to the uniform chromaticity space CIE 1960 UCS by Equation (8) [45]. The average chromaticity coordinate of the brightest p percent of pixels, ranked by the Y values representing brightness, is computed and denoted by (ū, v̄). It is taken as the chromaticity coordinate of the white point under the scene illumination and converted back to the XYZ space by Equation (9), giving [Xsw Ysw Zsw].
The CIE standard illuminant D65 is taken as the destination illumination to stay consistent with the sRGB color space; that is, [Xdw Ydw Zdw] = [0.9505 1.0 1.0888]. The XYZ values of the restored image, i.e., [Xd Yd Zd], can then be computed by Equation (1), and the RGB values can be obtained by inverting Equations (5)–(7).
$$
u = \frac{4X}{X + 15Y + 3Z}, \quad
v = \frac{6Y}{X + 15Y + 3Z}
\tag{8}
$$
$$
x = \frac{3\bar{u}}{2\bar{u} - 8\bar{v} + 4}, \quad
y = \frac{\bar{v}}{\bar{u} - 4\bar{v} + 2}, \qquad
X_{sw} = \frac{x\,Y_{sw}}{y}, \quad
Y_{sw} = 1, \quad
Z_{sw} = \frac{(1 - x - y)\,Y_{sw}}{y}
\tag{9}
$$
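The white-point estimation of Equations (8) and (9) can be sketched as follows (Python/NumPy; the names and structure are our own illustration).

```python
import numpy as np

D65_WHITE = np.array([0.9505, 1.0, 1.0888])  # destination white point [Xdw Ydw Zdw]

def estimate_source_white(xyz, p=10.0):
    """Estimate [Xsw Ysw Zsw] from the brightest p percent of pixels,
    averaging their chromaticity in CIE 1960 UCS (Equations (8) and (9))."""
    flat = xyz.reshape(-1, 3)
    k = max(1, int(flat.shape[0] * p / 100.0))
    bright = flat[np.argsort(flat[:, 1])[-k:]]     # rank by Y (brightness)
    X, Y, Z = bright[:, 0], bright[:, 1], bright[:, 2]
    denom = X + 15 * Y + 3 * Z
    u_bar = np.mean(4 * X / denom)                 # Equation (8), averaged
    v_bar = np.mean(6 * Y / denom)
    x = 3 * u_bar / (2 * u_bar - 8 * v_bar + 4)    # Equation (9)
    y = v_bar / (u_bar - 4 * v_bar + 2)
    Ysw = 1.0
    return np.array([x * Ysw / y, Ysw, (1 - x - y) * Ysw / y])
```

The full pipeline then chains srgb_to_xyz and estimate_source_white from the sketches above, applies chromatic_adaptation with D65_WHITE as the destination white, and finally inverts Equations (5)–(7) to return to sRGB.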

4. Results

Each of the compared methods is described briefly in turn. In the GW method, the channel gains are the ratios of half the RGB values of the white point under the canonical illumination to the mean RGB values of the whole image. In the WP method, the channel gains are the ratios of the RGB values of the white point under the canonical illumination to the mean RGB values of the brightest 10% of pixels, ranked by the sum of the RGB values. The SB method was coded exactly following the paper [21], with μ fixed at 2.3. We reproduced the color correction method proposed by Bianco et al. [37] without implementing the contrast improvement. In our method, p is set to 10.
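For reference, the following sketch shows one way the GW and WP channel gains described above could be computed on a linear RGB image with values in [0, 1]; this is our own illustrative reading of the descriptions, not the authors' code.

```python
import numpy as np

CANONICAL_WHITE = np.array([1.0, 1.0, 1.0])  # RGB of white under canonical illumination

def gray_world_gains(img):
    """GW: ratios of half the canonical white to the per-channel means."""
    return (CANONICAL_WHITE / 2.0) / img.reshape(-1, 3).mean(axis=0)

def white_patch_gains(img, p=10.0):
    """WP: ratios of the canonical white to the mean RGB of the brightest
    p percent of pixels, ranked by R + G + B."""
    flat = img.reshape(-1, 3)
    k = max(1, int(flat.shape[0] * p / 100.0))
    bright = flat[np.argsort(flat.sum(axis=1))[-k:]]
    return CANONICAL_WHITE / bright.mean(axis=0)

# A corrected image is then np.clip(img * gains, 0.0, 1.0).
```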
Evaluating a restored image by comparing it with the ground truth is hard to achieve, since it is difficult, if not impossible, to obtain medium-free in situ images [46]. Restored images are usually evaluated subjectively, objectively based on color cards, or objectively based on no-reference image quality metrics. However, the reliability of existing no-reference image quality metrics is in doubt, since they often produce results that are inconsistent with visual observation [1,8,46]. Therefore, only subjective assessment and, for images from the Haze-line dataset, color-card-based objective assessment are adopted in this paper.

4.1. Evaluation on UIEB Dataset

The UIEB dataset [8] is used first to test the proposed method. In the UIEB dataset, the reference images were manually selected by 50 volunteers from potential reference images generated by 12 image enhancement methods [8]. Following [8], the proposed method is tested on five categories of images: greenish and bluish images, downward-looking images, forward-looking images, low-backscatter scenes, and high-backscatter scenes. The results of the different methods and the corresponding references are shown in Figure 3, Figure 4, Figure 5, Figure 6 and Figure 7. Zoom in on the images to see more details.
It can be seen from Figure 4 and Figure 5 that the proposed method does not show obvious advantages on images with little color cast, which is consistent with expectations. From Figure 6, we can see that the proposed method corrects the color cast well, although the restored image still does not look very good due to the unimproved contrast. The test methods were compared on all 890 images with references in the UIEB dataset. We found that the GW, WP, and SB [21] methods often cause artifacts, especially in the dark regions of images with severe color cast, as shown in Figure 8. Bianco's method [37] often over-grays the images, compressing or losing color, as shown in Figure 4, Figure 5, and Figure 9. The proposed method consistently outperforms the other methods on images with severe color cast. Many restored images obtained by the proposed method look even more natural than the reference images, as shown in Figure 10.

4.2. Evaluation on Haze-line Dataset

The results on the Haze-line dataset [13] are shown in Figure 11. The result of the haze-lines method [13] was generated using code released by the authors. For evaluation, the mean angular error between the six grayscale patches of each chart and a pure gray color was computed, as done in Reference [13]:
$$
\bar{\psi} = \frac{1}{6} \sum_{x_i} \cos^{-1}\!\left[\frac{I(x_i) \cdot (1, 1, 1)}{\lVert I(x_i) \rVert \sqrt{3}}\right]
\tag{10}
$$
A lower angular error indicates a more accurate color restoration. Many images in the Haze-line dataset contain multiple color charts at different distances from the camera. The charts are ordered by their distance to the camera, with chart #1 closest. The angular errors for each chart, image, and method are shown in Table 1. As Table 1 shows, our method achieves the lowest average angular error.
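A Python/NumPy sketch of the metric in Equation (10); the function name and the assumption that the six patch colors are supplied as mean RGB values are ours.

```python
import numpy as np

def mean_angular_error(patches):
    """Mean angular error in degrees between grayscale patch colors and
    pure gray (Equation (10)); `patches` is a (6, 3) array of mean RGB values."""
    I = np.asarray(patches, dtype=np.float64)
    cos = I.sum(axis=1) / (np.linalg.norm(I, axis=1) * np.sqrt(3.0))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))).mean())
```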

4.3. Evaluation on Sea-Thru Dataset

Since the Sea-thru dataset [7] only provides unprocessed sensor data stored in RAW photo formats, Imaging Edge Desktop and Adobe Photoshop Lightroom were used to convert the ARW files to TIF and JPG files without any post-processing. No reference images are provided in the Sea-thru dataset [7], so only the results obtained by the test methods are given in Figure 12. It can be seen that the proposed method removes the color cast well and generates more colorful and realistic images than Bianco's method [37].

5. Discussion

5.1. Artifacts Caused by Over-Enhancement

Conventional color correction methods that calculate a gain for each channel in RGB space, such as the GW, WP, SB [21], and DT [20] methods, are prone to over-enhancing the severely attenuated channel, resulting in obvious artifacts. Pixels with nonzero values in the severely attenuated channel tend to be over-amplified, as shown in Figure 13 and Figure 14. Methods that implement color correction in other color spaces, such as Bianco's method [37], Emberton's method [38], and the proposed method, do not produce obvious artifacts. Working in such color spaces is therefore a good way to avoid artifacts.

5.2. Linearization of RGB Images

Linear RGB images are generally nonlinearly encoded, known as gamma correction, in the image signal processing pipeline embedded in a camera's hardware [47] to match the output characteristics of monitors. It is therefore important and necessary to undo this nonlinearity, known as de-gamma correction, to ensure that color correction is implemented in a linear RGB coordinate system [37]. For verification, the linearization step was removed from the flow of the proposed method, and several typical results are compared in Figure 15. We can see that artifacts are also produced in some images without linearization and that the color correction performance becomes slightly worse. However, the artifacts are slight compared with those of the conventional color correction methods, such as GW, WP, and SB [21].

5.3. Underwater Image Enhancement

To obtain in-air images of underwater scenes by image enhancement, color correction methods should be combined with contrast improvement methods. Figure 16 shows several enhanced images obtained by applying a simple contrast improvement method, contrast limited adaptive histogram equalization (CLAHE) [48], after the proposed color correction method; a sketch of this combination is given below. It can be seen that both the color cast and the contrast are significantly improved. However, most existing contrast improvement methods are designed to deal with a specific problem, and some of them also change the image color. Finding a more effective and robust underwater image enhancement method remains an open problem.
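As an illustration, one way to chain the two steps with OpenCV is sketched below. The paper does not specify CLAHE's parameters or which channel it operates on, so the clip limit, tile size, and the choice to equalize only the lightness channel are our assumptions.

```python
import cv2

def enhance(bgr_corrected):
    """Apply CLAHE to the lightness channel of a color-corrected 8-bit BGR
    image, leaving the chromatic channels untouched."""
    lab = cv2.cvtColor(bgr_corrected, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)
```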

5.4. Evaluation of Restored Images

Since inconsistency between common no-reference image quality metrics and visual observation has been observed [1,8,46], color-card-based evaluation methods are more reliable. However, using only the grayscale patches is not accurate enough [7,13]. Taking advantage of the color patches on color charts as well could evaluate color accuracy better [11].

6. Conclusions

A simple chromatic adaptation-based color correction method for underwater images is proposed in this paper. The underwater RGB image is first linearized to make its pixel values proportional to the light intensities arriving at the pixels, ensuring that the color correction operation is performed in a linear space. The illumination is estimated in the uniform chromaticity space CIE 1960 UCS using the brightest 10% of pixels, based on the WP hypothesis. The chromatic adaptation transform is implemented in the device-independent CIE 1931 XYZ color space. Experiments show that the proposed method is quite robust and produces visually pleasing results, while the other methods often introduce artifacts or over-enhancement. How to combine the color correction method with contrast improvement methods or model-based restoration methods to obtain the in-air images of underwater scenes will be the focus of follow-up work.

Author Contributions

Conceptualization, Y.L., Y.T. and X.Y.; methodology, X.Y. and H.F.; software, C.Y.; validation, C.Y. and Z.Z.; investigation, X.Y.; resources, X.Y. and Z.Z.; writing—original draft preparation, X.Y.; writing—review and editing, X.Y., H.F. and W.L.; visualization, D.W.; funding acquisition, Y.L., Y.T. and H.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by National Natural Science Foundation of China (61991413, 61973224), the Key Research and Development Program of Liaoning (2019JH2/10100014), the Natural Science Foundation of Liaoning Province (2019-ZD-0673, 2019-KF-01-15), the Key Laboratory of Road Construction Technology and Equipment (Chang’an University), MOE (300102259506).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wang, Y.; Song, W.; Fortino, G.; Qi, L.-Z.; Zhang, W.; Liotta, A. An Experimental-Based Review of Image Enhancement and Image Restoration Methods for Underwater Imaging. IEEE Access 2019, 7, 140233–140251.
  2. Lu, H.; Li, Y.; Serikawa, S. Underwater image enhancement using guided trigonometric bilateral filter and fast automatic color correction. In Proceedings of the 20th IEEE International Conference on Image Processing, Melbourne, Australia, 15–18 September 2013; pp. 3412–3416.
  3. Wang, K.; Hu, Y.; Chen, J.; Wu, X.; Zhao, X.; Li, Y. Underwater image restoration based on a parallel convolutional neural network. Remote Sens. 2019, 11, 1591.
  4. Corchs, S.; Schettini, R. Underwater image processing: State of the art of restoration and image enhancement methods. EURASIP J. Adv. Sign. Process. 2010, 2010, 746052.
  5. He, K.; Sun, J.; Tang, X. Single image haze removal using dark channel prior. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 1956–1963.
  6. Drews, P., Jr.; Do Nascimento, E.; Moraes, F.; Botelho, S.; Campos, M. Transmission estimation in underwater single images. In Proceedings of the 14th IEEE International Conference on Computer Vision Workshops, Sydney, Australia, 1–8 December 2013; pp. 825–830.
  7. Akkaynak, D.; Treibitz, T. Sea-THRU: A method for removing water from underwater images. In Proceedings of the 32nd IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; pp. 1682–1691.
  8. Li, C.; Guo, C.; Ren, W.; Cong, R.; Hou, J.; Kwong, S.; Tao, D. An Underwater Image Enhancement Benchmark Dataset and Beyond. IEEE Trans. Image Process. 2020, 29, 4376–4389.
  9. Hou, M.; Liu, R.; Fan, X.; Luo, Z. Joint Residual Learning for Underwater Image Enhancement. In Proceedings of the 25th IEEE International Conference on Image Processing, Athens, Greece, 7–10 October 2018; pp. 4043–4047.
  10. Lu, J.; Li, N.; Zhang, S.; Yu, Z.; Zheng, H.; Zheng, B. Multi-scale adversarial network for underwater image restoration. Opt. Laser Technol. 2019, 110, 105–113.
  11. Li, J.; Skinner, K.A.; Eustice, R.M.; Johnson-Roberson, M. WaterGAN: Unsupervised generative network to enable real-time color correction of monocular underwater images. IEEE Robot. Autom. Lett. 2018, 3, 387–394.
  12. Li, C.; Anwar, S.; Porikli, F. Underwater scene prior inspired deep underwater image and video enhancement. Pattern Recognit. 2020, 98, 107038.
  13. Berman, D.; Levy, D.; Avidan, S.; Treibitz, T. Underwater Single Image Color Restoration Using Haze-Lines and a New Quantitative Dataset. IEEE Trans. Pattern Anal. Mach. Intell. 2020.
  14. Hirakawa, K.; Parks, T.W. Chromatic adaptation and white-balance problem. In Proceedings of the IEEE International Conference on Image Processing, Genova, Italy, 11–14 September 2005; pp. 984–987.
  15. Gasparini, F.; Schettini, R. Color balancing of digital photos using simple image statistics. Pattern Recognit. 2004, 37, 1201–1217.
  16. Kerouh, F.; Ziou, D.; Lahmar, K.N. Content-based computational chromatic adaptation. Pattern Anal. Appl. 2018, 21, 1109–1120.
  17. Wilkie, A.; Weidlich, A. A Robust Illumination Estimate for Chromatic Adaptation in Rendered Images. Comput. Graph. Forum 2009, 28, 1101–1109.
  18. Gijsenij, A.; Gevers, T.; Van De Weijer, J. Computational color constancy: Survey and experiments. IEEE Trans. Image Process. 2011, 20, 2475–2489.
  19. Buchsbaum, G. Spatial processor model for object color perception. J. Franklin Inst. 1980, 310, 1–26.
  20. Weng, C.-C.; Chen, H.; Fuh, C.-S. A novel automatic white balance method for digital still cameras. In Proceedings of the IEEE International Symposium on Circuits and Systems, Kobe, Japan, 23–26 May 2005; pp. 3801–3804.
  21. Fu, X.; Zhuang, P.; Huang, Y.; Liao, Y.; Zhang, X.-P.; Ding, X. A retinex-based enhancing approach for single underwater image. In Proceedings of the IEEE International Conference on Image Processing, Paris, France, 27–30 October 2014; pp. 4572–4576.
  22. Funt, B.; Barnard, K.; Martin, L. Is machine colour constancy good enough? In Proceedings of the 5th European Conference on Computer Vision, Freiburg, Germany, 2–6 June 1998; pp. 445–459.
  23. Van de Weijer, J.; Gevers, T.; Gijsenij, A. Edge-based color constancy. IEEE Trans. Image Process. 2007, 16, 2207–2214.
  24. Land, E.H. The Retinex Theory of Color Vision. Sci. Am. 1977, 237, 108–128.
  25. Li, C.; Guo, J. Underwater image enhancement by dehazing and color correction. J. Electron. Imaging 2015, 24, 033023.
  26. Chambah, M.; Semani, D.; Renouf, A.; Courtellemont, P.; Rizzi, A. Underwater color constancy: Enhancement of automatic live fish recognition. In Proceedings of the Color Imaging IX: Processing, Hardcopy, and Applications, San Jose, CA, USA, 20–22 January 2004; pp. 157–168.
  27. Wong, S.-L.; Paramesran, R.; Yoshida, I.; Taguchi, A. An Integrated Method to Remove Color Cast and Contrast Enhancement for Underwater Image. IEICE Trans. Fund. Electron. Commun. Comput. Sci. 2019, 1524–1532.
  28. Wang, Y.; Ding, X.; Wang, R.; Zhang, J.; Fu, X. Fusion-based underwater image enhancement by wavelet decomposition. In Proceedings of the IEEE International Conference on Industrial Technology, Toronto, ON, Canada, 23–25 March 2017; pp. 1013–1018.
  29. Ding, X.; Wang, Y.; Zhang, J.; Fu, X. Underwater image dehaze using scene depth estimation with adaptive color correction. In Proceedings of OCEANS, Aberdeen, UK, 19–22 June 2017; pp. 1–5.
  30. Henke, B.; Vahl, M.; Zhou, Z. Removing color cast of underwater images through non-constant color constancy hypothesis. In Proceedings of the 8th International Symposium on Image and Signal Processing and Analysis, Trieste, Italy, 4–6 September 2013; pp. 20–24.
  31. Ancuti, C.O.; Ancuti, C.; De Vleeschouwer, C.; Garcia, R. A semi-global color correction for underwater image restoration. In Proceedings of the 44th International Conference on Computer Graphics and Interactive Techniques, Los Angeles, CA, USA, 30 July–3 August 2017; p. a66.
  32. Ancuti, C.O.; Ancuti, C.; De Vleeschouwer, C.; Garcia, R. Locally Adaptive Color Correction for Underwater Image Dehazing and Matching. In Proceedings of the 30th Conference on Computer Vision and Pattern Recognition Workshops, Honolulu, HI, USA, 21–26 July 2017; pp. 997–1005.
  33. Li, C.; Guo, J.; Guo, C. Emerging from Water: Underwater Image Color Correction Based on Weakly Supervised Color Transfer. IEEE Signal Process. Lett. 2018, 25, 323–327.
  34. Liu, Y.; Xu, H.; Shang, D.; Li, C.; Quan, X. An underwater image enhancement method for different illumination conditions based on color tone correction and fusion-based descattering. Sensors 2019, 19, 5567.
  35. Liu, H.; Chau, L.-P. Underwater image color correction based on surface reflectance statistics. In Proceedings of the Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, Hong Kong, China, 16–19 December 2015; pp. 996–999.
  36. Zhang, W.; Li, G.; Ying, Z. A new underwater image enhancing method via color correction and illumination adjustment. In Proceedings of the IEEE Visual Communications and Image Processing, St. Petersburg, FL, USA, 10–13 December 2017.
  37. Bianco, G.; Muzzupappa, M.; Bruno, F.; Garcia, R.; Neumann, L. A new color correction method for underwater imaging. In Proceedings of the 2015 Underwater 3D Recording and Modelling, Piano di Sorrento, Italy, 16–17 April 2015; pp. 25–32.
  38. Emberton, S.; Chittka, L.; Cavallaro, A. Underwater image and video dehazing with pure haze region segmentation. Comput. Vis. Image Underst. 2018, 168, 145–156.
  39. Von Kries, J. Chromatic adaptation. In Sources of Color Science, 1st ed.; MacAdam, D.L., Ed.; MIT Press: Cambridge, MA, USA, 1970; pp. 120–127.
  40. Lam, K.M. Metamerism and Colour Constancy; University of Bradford: Bradford, UK, 1985.
  41. Hunt, R.W.G. Reversing the Bradford chromatic adaptation transform. Color Res. Appl. 1997, 22, 355–356.
  42. Lindbloom, B.J. RGB to XYZ. Available online: http://www.brucelindbloom.com/index.html?Eqn_ChromAdapt.html (accessed on 22 April 2020).
  43. Li, H.; Kot, A.C.; Li, L. Color space identification from single images. In Proceedings of the IEEE International Circuit and System Symposium, Montreal, QC, Canada, 22–25 May 2016; pp. 1774–1777.
  44. Anderson, M.; Motta, R.; Chandrasekar, S.; Stokes, M. Proposal for a standard default color space for the Internet—sRGB. In Proceedings of the 4th IS&T/SID Color Imaging Conference: Color Science, Systems, and Applications, Scottsdale, AZ, USA, 19–22 November 1996; pp. 238–246.
  45. Judd, D.B.; Yonemura, G.T. CIE 1960 UCS diagram and the Müller theory of color vision. J. Res. Natl. Bur. Stand. Sect. A Phys. Chem. 1970, 74A, 23.
  46. Berman, D.; Treibitz, T.; Avidan, S. Diving into haze-lines: Color restoration of underwater images. In Proceedings of the 28th British Machine Vision Conference, London, UK, 4–7 September 2017.
  47. Karaimer, H.C.; Brown, M.S. A software platform for manipulating the camera imaging pipeline. In Proceedings of the 14th European Conference on Computer Vision, Amsterdam, The Netherlands, 8–16 October 2016; pp. 429–444.
  48. Pisano, E.D.; Zong, S.; Hemminger, B.M.; DeLuca, M.; Johnston, R.E.; Muller, K.; Braeuning, M.P.; Pizer, S.M. Contrast limited adaptive histogram equalization image processing to improve the detection of simulated spiculations in dense mammograms. J. Digit. Imaging 1998, 11, 193–200.
Figure 1. Color cast and restoration of underwater images: (a) and (c) show the raw images, (b) and (d) present the restored results obtained by the proposed method. Raw images are provided with the UIEB dataset [8].
Figure 2. Color correction flow.
Figure 3. Comparisons on bluish and greenish underwater images. From left to right are raw underwater images, the results of gray-world (GW) method, white-patch (WP) method, statistical-based (SB) method [21], Bianco’s (GWLαβ) method [37], proposed method and reference images.
Figure 4. Comparisons on downward looking images. From left to right are raw underwater images, the results of gray-world (GW) method, white-patch (WP) method, statistical-based (SB) method [21], Bianco’s (GWLαβ) method [37], proposed method and reference images.
Figure 5. Comparisons on forward looking images. From left to right are raw underwater images, the results of gray-world (GW) method, white-patch (WP) method, statistical-based (SB) method [21], Bianco’s (GWLαβ) method [37], proposed method and reference images.
Figure 6. Comparisons on low backscatter scenes. From left to right are raw underwater images, the results of gray-world (GW) method, white-patch (WP) method, statistical-based (SB) method [21], Bianco’s (GWLαβ) method [37], proposed method and reference images.
Figure 7. Comparisons on high backscatter scenes. From left to right are raw underwater images, the results of gray-world (GW) method, white-patch (WP) method, statistical-based (SB) method [21], Bianco’s (GWLαβ) method [37], proposed method and reference images.
Figure 8. Comparisons on images with severe color cast. From left to right are raw underwater images, the results of gray-world (GW) method, white-patch (WP) method, statistical-based (SB) method [21], Bianco’s (GWLαβ) method [37], proposed method and reference images.
Figure 9. Comparisons on images with color prior. From left to right are raw underwater images, the results of gray-world (GW) method, white-patch (WP) method, statistical-based (SB) method [21], Bianco’s (GWLαβ) method [37], proposed method and reference images.
Figure 10. Sample images on which the proposed method produces more natural results than reference images. From left to right are raw underwater images, the results of gray-world (GW) method, white-patch (WP) method, statistical-based (SB) method [21], Bianco’s (GWLαβ) method [37], proposed method and reference images.
Figure 11. Comparisons on images in Haze-line dataset [13]. From left to right are raw underwater images, the results of gray-world (GW) method, white-patch (WP) method, statistical-based (SB) method [21], Bianco’s (GWLαβ) method [37], proposed method and haze-lines method [13].
Figure 12. Comparisons on images in Sea-thru dataset [7]. From left to right are raw underwater images, the results of gray-world (GW) method, white-patch (WP) method, statistical-based (SB) method [21], Bianco’s (GWLαβ) method [37] and the proposed method.
Figure 13. Reddish artifacts produced by conventional color correction method. From left to right are raw underwater images, R channels, G channels, B channels, binarized R channels and the results of gray-world (GW) method.
Figure 14. Blueish artifacts produced by conventional color correction method. From left to right are raw underwater images, R channels, G channels, B channels, binarized B channels and the results of gray-world (GW) method.
Figure 15. Comparisons on results obtained by the proposed method with and without linearization: (a) Raw underwater images, (b) Results with linearization, (c) Results without linearization.
Figure 16. Enhanced underwater images obtained by combining the proposed color correction method with CLAHE: (a) Raw underwater images, (b) Results of the proposed color correction method, (c) Results by applying CLAHE to images shown in (b), (d) Reference images. Raw and reference images are provided with the UIEB dataset [8].
Table 1. Angular error (unit: degrees) in RGB space between the gray-scale patches and a pure gray color, for each chart in each image, and all methods. Lower is better. From left to right are raw underwater images, the results of gray-world (GW) method, white-patch (WP) method, statistical-based (SB) method [21], Bianco’s (GWLαβ) method [37], proposed method and haze-lines method [13].
Image No.   Raw     GW      WP      SB      GWLαβ   Ours    Haze-Lines
3008#1      31.41   26.87   27.51    9.85    7.42    4.67    2.55
3008#2      35.27   35.26   35.30   10.93    8.04    5.10    3.43
3008#3      35.27   35.26   35.31   11.41    8.04    5.14    3.07
3008#4      35.28   35.27   35.33   10.53    8.53    5.60    0.79
3008#5      35.28   35.27   35.34   11.74    8.36    5.56    0.71
3204#1      30.41   27.20   26.67    9.10    5.43    5.65    8.44
3204#2      28.61   21.09   23.25    9.24    5.04    5.30   12.46
3204#3      35.29   35.27   35.36    9.74    6.51    7.24    4.22
3204#4      35.30   35.27   35.37   11.58    6.25    7.30    7.65
3204#5      35.31   35.28   35.40    9.48    6.70    7.55    3.73
4485#1      33.49   31.15   29.44    8.67    6.56    2.54    5.84
4485#2      35.58   35.27   35.45    3.66    8.50    4.01    1.90
4485#3      35.59   35.27   35.43    9.75    7.42    4.06    2.12
4491#1      35.19   29.79   30.98    9.54    7.58    3.27    7.67
4491#2      35.49   35.28   35.39    5.68    8.12    3.24    5.06
4491#3      35.60   35.32   35.47    9.74    8.23    4.25    5.19
5450#1      29.68   27.30   23.99   13.58    5.90    4.27    6.12
5450#2      28.72   21.90   22.20    7.69    4.72    3.52    7.30
5450#3      32.65   31.28   29.44   10.16    5.16    3.90    5.46
5450#4      34.24   31.45   31.34   10.76    5.45    4.60    4.49
5450#5      35.58   35.37   35.48    7.42    6.89    6.23    4.78
5469#1      32.41   30.50   29.47    9.56    4.74    3.37    6.95
5469#2      35.56   35.33   35.48    8.03    5.49    4.81    3.32
5469#3      35.43   35.28   35.37   11.16    4.68    3.67    7.50
5469#4      35.59   35.33   35.52    6.97    5.51    5.23    3.06
5478#1      33.25   30.30   29.43    9.54    3.90    2.40    8.35
5478#2      33.27   30.30   29.43    9.58    3.89    2.41    8.33
5478#3      35.55   35.29   35.40    9.26    4.40    3.77    4.68
5478#4      35.74   35.34   35.57    7.00    4.97    5.23    4.95
Average     34.00   32.38   32.25    9.36    6.29    4.62    5.18
