Design of a high performance architecture for real-time enhancement of video stream captured in extremely low lighting environment

https://doi.org/10.1016/j.micpro.2009.02.006

Abstract

A high performance digital architecture for the implementation of a non-linear image enhancement technique is proposed in this paper. The enhancement is based on a luminance-dependent non-linear algorithm which achieves simultaneous dynamic range compression, colour consistency and lightness rendition. The algorithm provides better colour fidelity, amplifies less noise, prevents unwanted luminance drops in uniform-luminance areas, keeps ‘bright’ backgrounds unaffected, and enhances ‘dark’ objects against ‘bright’ backgrounds. The algorithm involves a large number of complex computations and thus requires specialized hardware for real-time applications. Systolic, pipelined and parallel design techniques are used effectively in the proposed FPGA-based architecture to achieve real-time performance. Estimation techniques are also employed in the hardware algorithmic design to yield a faster, simpler and more efficient architecture. The video enhancement system is implemented on Xilinx’s multimedia development board, which contains a VirtexII-X2000 FPGA, and is capable of processing approximately 67 Mega-pixels (Mpixels) per second.

Introduction

Image enhancement algorithms are developed to improve the visual appearance of an image by increasing contrast, adjusting brightness, and enhancing visually important features. Image enhancement is a very important pre-processing stage in face detection and face recognition applications, especially when the environment is very dark. It is also very important in security and surveillance applications, where limitations in dynamic range and the lack of lighting sources prevent fine details of the scene from being captured. Many conventional image enhancement techniques have been presented, such as automatic gain/offset, non-linear gamma correction, non-linear transformation of pixel intensities, histogram equalization, homomorphic filtering, etc. Multi-Scale Retinex (MSR) [5] is a popular technique that works well under most lighting conditions; however, some improvements can still be made. The Luminance-Dependent Nonlinear Image Enhancement (LDNE) technique proposed by Tao and Asari improves on the conventional MSR technique in several areas [9]. One improvement is the suppression of unwanted noise in the enhancement process, achieved by summing the results of convolving the luminance with Gaussians of different scales. Another is a colour saturation adjustment that produces more natural colours for different types of pictures or video images by either compensating or counteracting the original colour saturation in each band.
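The multi-scale surround computation described above can be sketched as follows. This is an illustrative sketch only: the scales, the equal weights, and the reflect padding are assumptions for the example, not values taken from the paper.

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """1-D Gaussian kernel, normalized to sum to 1."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def blur(image, sigma):
    """Separable Gaussian blur (reflect padding) of a 2-D luminance image."""
    radius = int(3 * sigma)
    k = gaussian_kernel(sigma, radius)
    padded = np.pad(image, radius, mode="reflect")
    # Convolve rows, then columns (separability of the Gaussian).
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, rows)

def multi_scale_surround(lum, sigmas=(5, 20, 60)):
    """Average of Gaussian-blurred luminance at several scales
    (equal weights assumed for illustration)."""
    return sum(blur(lum, s) for s in sigmas) / len(sigmas)
```

Summing over several scales combines the local contrast of the small surround with the tonal stability of the large one, which is why a single-scale surround tends to amplify more noise than the multi-scale sum.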

Computation of the LDNE algorithm requires a large number of complex operations on every pixel of an image. Although processing speed in general purpose computers has increased significantly in recent years, it is approaching its upper limit. These systems have only enough processing power to support real-time enhancement of small video frames. For larger images, real-time processing on general purpose computers is not possible, because these systems can only process data sequentially and with significant memory overhead. Alternatively, Digital Signal Processors (DSPs) can be used to support real-time image processing applications. Using DSPs for the image enhancement problem provides some improvement over software on general purpose computers by employing ‘limited’ parallelization in the core processor and by using optimized DSP libraries for some complex operations [4]. Still, DSPs do not take full advantage of the inherent parallelism in the image enhancement algorithm; hence, enhancing 25–30 large video frames (such as 1024 × 1024 frames) per second on DSPs is still not possible. The intensive computations in this image enhancement technique require massive parallel processing capability to support real-time processing of a large-size video stream.
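To put the required throughput in numbers, using the frame size and rate from the discussion above:

```python
# Required pixel throughput for real-time enhancement of 1024 x 1024 video.
width, height = 1024, 1024
fps = 30                      # upper end of the 25-30 frames/s range above
required = width * height * fps
print(required / 1e6)         # ~31.46 Mpixels/s
```

About 31.5 Mpixels/s must be sustained, with every pixel undergoing convolutions, logarithms and divisions, which is well within the roughly 67 Mpixels/s the FPGA design reports but out of reach of sequential software processing.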

Field Programmable Gate Arrays (FPGAs) provide an attractive solution to this problem because of their high density, high performance and complete configurability for specific applications. An FPGA combines the flexibility of a general purpose computer with the hardware speed of Application Specific Integrated Circuits (ASICs). An architecture designed for FPGA technology can fully exploit the data and I/O parallelism of any particular image processing application. Additionally, DSP-oriented FPGAs readily available in the market contain a substantial number of technology-optimized, built-in components frequently used in DSP applications, such as multipliers, Multiply-and-Accumulate (MAC) units, etc. In recent years, many researchers have exploited the advantages of FPGAs to implement computationally intensive algorithms for real-time applications [1], [2], [3], [7]. In this paper, we present an efficient FPGA-based architecture design for the LDNE algorithm that supports real-time processing of large-size video streams.

Section snippets

The image enhancement algorithm

The LDNE algorithm used in this paper is composed of independent steps for dynamic range compression and colour balancing. The first step in the algorithm is the non-linear transformation of the input image from RGB space to luminance space by:

I(x, y) = (R(x, y)² + G(x, y)² + B(x, y)²) / (R(x, y) + G(x, y) + B(x, y))

where I(x, y) is the luminance of the pixel at location (x, y), and R(x, y), G(x, y) and B(x, y) are the values of the Red, Green and Blue colour bands of the pixel.
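As a minimal sketch, the RGB-to-luminance transformation above can be written as below; the small eps guard against an all-zero (black) pixel is an implementation detail assumed here, not specified in the paper.

```python
import numpy as np

def luminance(r, g, b, eps=1e-8):
    """I = (R^2 + G^2 + B^2) / (R + G + B).
    eps avoids division by zero for black pixels (an assumption of this sketch)."""
    r, g, b = (np.asarray(c, dtype=float) for c in (r, g, b))
    return (r**2 + g**2 + b**2) / (r + g + b + eps)
```

Note that for a grey pixel with R = G = B = v the formula reduces to 3v²/3v = v, while for saturated colours the quadratic numerator weights the dominant band more heavily than a plain average would.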

Once the luminance of a pixel

High performance hardware architecture design

As in many other image processing applications, the real-time image enhancement application presented in this paper exhibits a high level of data parallelism, since the computations involving each pixel in the image are identical. It is essential to take advantage of this parallelism to achieve maximum throughput. The architecture for the LDNE image enhancement algorithm is designed to exploit the inherent parallelism in the technique by partitioning computations to various modules in a

Simulation and performance analysis

In this section, we discuss the results of the hardware implementation on an FPGA development board, which demonstrate that the system is able to enhance a video stream in real time and produces an output video stream at a rate equal to the input rate. Since we employ an estimation method in the computational procedure, the processed image obtained from the hardware architecture is compared with the simulation result obtained in software for analysis purposes. Performance

Conclusions

The design of an FPGA-based architecture for the implementation of a non-linear image enhancement technique for real-time video applications has been presented in this paper. The enhancement technique, which is based on the luminance of each pixel, considers information from neighbouring pixels to enhance images taken under extremely dark or non-uniform lighting conditions. The hardware design uses estimation methods to compute complex operations such as the logarithm and inverse logarithm fast and
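The paper's snippet does not detail its estimation method. One common hardware-friendly choice, sketched here purely as an illustration and not necessarily the scheme used in the paper, is a piecewise-linear logarithm: the integer part comes from the position of the leading one bit (a priority encoder in hardware) and the fractional part from the remaining mantissa bits (a shifter), avoiding any iterative computation.

```python
def log2_approx(x):
    """Piecewise-linear base-2 logarithm of a positive integer.
    Hardware analogue: priority encoder (leading-one position) + shifter.
    Illustrative estimation scheme only -- an assumption of this sketch."""
    assert x > 0
    n = x.bit_length() - 1             # floor(log2(x)), the leading-one position
    frac = (x - (1 << n)) / (1 << n)   # remaining mantissa bits in [0, 1)
    return n + frac                    # linear interpolation between powers of 2

def exp2_approx(y):
    """Exact inverse of the piecewise-linear log2 above."""
    n = int(y)
    frac = y - n
    return (1 << n) * (1 + frac)
```

The maximum error of this approximation is below 0.09 in log2 units (at the midpoint between powers of two), and the forward and inverse maps round-trip exactly, which keeps the estimation error from accumulating through the log/antilog pair.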

References

  • R. Gottumukkal et al., Multi-lane architecture for eigenface based real-time face recognition, Journal of Microprocessors and Microsystems (2006)
  • D.G. Bariamis, D.K. Iakovidis, D.E. Maroulis, S.A. Karkanis, An FPGA-based architecture for real-time image feature...
  • J. Batlle et al., A new FPGA/DSP-based parallel architecture for real-time image processing, Real-Time Imaging (2002)
  • G.D. Hines, Z. Rahman, D.J. Jobson, G.A. Woodell, DSP implementation of the retinex image enhancement algorithm, in:...
  • D.J. Jobson et al., A multiscale retinex for bridging the gap between colour images and the human observation of scenes, IEEE Transactions on Image Processing (1997)
