Article

A Cross Structured Light Sensor and Stripe Segmentation Method for Visual Tracking of a Wall Climbing Robot

College of Computer Science and Technology, Harbin Engineering University, Harbin 150001, China
*
Author to whom correspondence should be addressed.
Sensors 2015, 15(6), 13725-13751; https://doi.org/10.3390/s150613725
Submission received: 24 April 2015 / Revised: 27 May 2015 / Accepted: 5 June 2015 / Published: 11 June 2015
(This article belongs to the Special Issue Sensors for Robots)

Abstract

In non-destructive testing (NDT) of metal welds, weld line tracking is usually performed outdoors, where the structured light sources are disturbed by various noises, such as sunlight, shadows, and reflections from the weld line surface. In this paper, we design a cross structured light (CSL) sensor to detect the weld line and propose a robust laser stripe segmentation algorithm to overcome the noises in structured light images. An adaptive monochromatic value space is applied to preprocess the image against ambient noise. In the monochromatic image, the laser stripe is treated as a multichannel signal and recovered by minimum entropy deconvolution. Lastly, the stripe centre points are extracted from the image. In the experiments, the CSL sensor and the proposed algorithm are applied to guide a wall climbing robot inspecting the weld lines of a wind power tower. The experimental results show that the CSL sensor can capture the 3D information of the welds with high accuracy, and that the proposed algorithm contributes to both weld line inspection and robot navigation.

1. Introduction

NDT is very important to guarantee the safe operation of many industrial facilities, such as wind turbine towers and oil storage tanks. In these industrial settings, automatic weld line tracking can guide a moving wall climbing robot and significantly improve the testing performance. From the perspective of sensors, structured light is a typical vision sensor that offers simplicity, non-contact operation, strong resistance to interference and other merits. It has been widely used in many fields, such as robot navigation [1], automatic welding [2], industry inspection [3,4], 3D measurement [5,6,7] and quality control [8,9]. Unlike weld seam detection during a welding process, weld line tracking for NDT is usually performed outdoors; therefore, the sensor is challenged by noise and variations in illumination intensity.
Structured light actually has two subclasses: multi-pattern (coded-pattern) and fixed-pattern. Multi-pattern approaches are accurate in the controlled environment of stationary scenes due to their high signal-to-noise ratio (SNR), whereas fixed-pattern methods are robust to ambient lighting changes but less accurate [10]. Generally, methods for 3D measurement and reconstruction of an object use coded-pattern structured light with high precision and fast speed. However, these methods need to operate in relatively ideal (indoor) scenes; outdoors, the coded pattern may fail due to ambient noise and reflections from the object surface [11,12,13]. Thus, methods based on a fixed pattern (that is, typical linear structured light) are more appropriate for vision navigation of robots in outdoor scenes. Structured light with a single laser stripe is the most widely used approach in industrial environments. For instance, Usamentiaga and Molleda et al. [14] used single laser stripes to inspect and control the flatness of steel strips in real time. To remove the effects of vibrations, two or more laser stripes are also used in uncontrolled environments [5,15].
When using a structured light sensor, robust stripe segmentation is a decisive step that determines the accuracy of tracking and localization of the weld line, because the stripe describes the local 3D details of the weld line surface. Most research focuses on peak detection, which is then used to construct the skeleton of the laser stripe. The most common methods search for the maximum intensity or detect the center of the stripe in the image using linear approximation, center of mass, Gaussian approximation, the parabolic estimator, or the Blais and Rioux detector. Fisher et al. [16] compared several of the most common approaches, ranging from simple methods to complex ones, and concluded that all of them performed within the same range. Haug indicated that the center of mass method produces the best results [17]. Strobl et al. [18] presented a stripe segmentation algorithm also based on the center of mass. They detected the stripe edge using the Sobel operator and then refined the edge points by means of a color look-up table and width limits, both of which need a clean background. Li et al. proposed using temporal data association to filter out the illumination noise from arc lights or splashes in video sequences [19]. However, it is difficult to extend this approach to outdoor scenes because the illumination variation outdoors is quite different from that of arc lights or splashes. Molleda et al. proposed a valid method based on the center of mass [20,21]. They use interference filters to acquire the parts of the image that match the laser wavelength, which greatly reduces noise, and the laser stripe is then segmented from the central points with an improved split-and-merge approach. In their paper, several approximation functions (linear, quadratic, and Akima splines) are evaluated. Ofner et al. used a line walking algorithm to merge line gaps, which segments the stripe by looking for maximum values higher than a certain threshold in the adjacent row [22]. Forest et al. used a FIR filter approach to detect the peak position of the laser stripe under different noise levels (at different signal-to-noise ratios) [23]. However, long-duration variation of illumination intensity has remained a challenge for stripe segmentation approaches.
Most peak detection approaches assume that the distribution of the laser illumination approximates a Gaussian. In these studies, the laser stripe is segmented only under low-noise conditions using local information. Such approaches are therefore sensitive to the environmental illumination and can only be applied to scenes with high contrast between the laser illumination and the background. Because of these demerits, they cannot be directly applied in outdoor and industrial environments with strong illumination and high noise.
Some approaches segment the laser stripe according to global information. In [24], a spring model based on the snake model improves the segmentation results and shows insensitivity to local noise. The method is robust to the low-quality images caused by laser speckle and illumination variations along the laser line, and it obtains an accurate, complete, robust and smooth centerline of the stripe. In many other studies, detection methods concentrate on the detection accuracy of the stripe, often at the expense of robustness and effectiveness.
In this paper, we design a cross structured light sensor for weld line detection and tracking by a wall climbing robot working in outdoor environments. The laser stripes projected by the CSL reflect the height of the weld convexity and can detect horizontal and vertical weld lines simultaneously. When the robot moves along a weld line, only one stripe is used to detect the weld line. As the robot approaches a T-intersection of two weld lines, two convex arcs appear on the stripes; the sensor then detects both weld lines simultaneously, which is used to plan the motion path of the robot (making a turn). The problem of stripe segmentation and centre point localization is formulated in three steps: firstly, to eliminate the effect of illumination, the best monochromatic value space is calculated by the minimum entropy model. Secondly, the minimum entropy deconvolution model is used to enhance the laser stripe. Thirdly, based on the results of the two steps above, all centre points of the stripe are localized by Steger's method [25]. The proposed algorithm segments the stripe according to global information and overcomes gaps and strong noises.
In the experiments, the stripe segmentation method is applied to the CSL sensor of the wall climbing robot. We quantitatively compare the proposed approach with four other approaches to verify its stripe segmentation performance. In addition, 3D measurement results of the weld line and path tracking of the wall climbing robot are shown. The controller of the robot is described in [26] and the weld line localization in [27], so the details of both are omitted in this paper.
The remainder of this paper is organized as follows: in Section 2, a wall climbing robot and CSL sensor are introduced. In Section 3, the approaches of segmenting the laser stripe and localizing centre points are presented. In Section 4, the experimental results and discussion are presented. The conclusions of the paper are given in Section 5.

2. Cross Structured Light Sensor

2.1. The Robot Platform

The quality inspection system of the weld line is shown in Figure 1a. It is composed of a wall climbing robot, an ultrasonic NDT device and a CSL sensor. Figure 1b,c illustrate the laser projector and the image-capturing CCD on a straight weld line and on crossing weld lines, respectively. Two crossed laser planes are projected from the laser projector, forming a convex light stripe around the weld line on the welding surface. The laser projector is fixed on the robot and oriented perpendicular to the welding surface. Two stripes s1 and s2 are formed by the intersection lines between the weld and the two orthogonal laser planes L1 and L2. According to the triangulation measurement method, the 3D information of the weld surface can be calculated by transforming the points of the stripes from the image coordinate system to the global coordinate system.
Figure 1. Illustration of the weld line inspection system composed of a wall climbing robot, an NDT device and a CSL sensor. (a) The system; (b) Detecting a straight weld line; and (c) T-intersection of weld lines.
In general, when the robot moves along a straight weld line, only one convex arc exists on the stripe s2, as shown in Figure 1b. When the robot is close to a T-intersection of vertical and horizontal weld lines, two convex arcs appear on the stripes, as shown in Figure 1c. The sensor can simultaneously detect the locations of the horizontal and vertical weld lines, which are used to plan the motion path of the robot.

2.2. Model of CSL Sensor

According to the above principle, Figure 2 describes the model of the CSL sensor. oc-xcyczc denotes the camera coordinate system, ocoi = f denotes the focal length of the camera, and oi-xiyi denotes the image coordinate system. In the camera coordinate system, the two mutually orthogonal laser planes L1 and L2 are defined as:
$$\begin{cases} a_1 x_c + b_1 y_c + c_1 z_c + 1 = 0 \\ a_2 x_c + b_2 y_c + c_2 z_c + 1 = 0 \end{cases} \quad (1)$$
where a1a2 + b1b2 + c1c2 = 0.
Figure 2. Model of the CSL sensor.
There is an arbitrary point P on the stripe s1. Pc = [xcp, ycp, zcp]T denotes the coordinates of point P in the camera coordinate system and Pu = [xup, yup]T denotes its coordinates in the image coordinate system. The pixel location of pu is represented by [uup, vup]T. Assuming the width and height of a pixel are dx (mm) and dy (mm), respectively, the transformation from millimeter coordinates to pixel coordinates in the image plane is described as:
$$\begin{bmatrix} u_{up} \\ v_{up} \end{bmatrix} = \begin{bmatrix} 1/d_x & 0 \\ 0 & 1/d_y \end{bmatrix}\begin{bmatrix} x_{up} \\ y_{up} \end{bmatrix} + \begin{bmatrix} u_0 \\ v_0 \end{bmatrix} \quad (2)$$
where the point [u0, v0]T is the intersection of the optical axis with the image plane, i.e., the principal point of the camera in the image plane. According to the pinhole model, the transformation from image coordinates to camera coordinates is described as:
$$\begin{bmatrix} u_{up} \\ v_{up} \\ 1 \end{bmatrix} = \begin{bmatrix} k_x & 0 & u_0 \\ 0 & k_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} x_{cp}/z_{cp} \\ y_{cp}/z_{cp} \\ 1 \end{bmatrix} = K \begin{bmatrix} x_{cp}/z_{cp} \\ y_{cp}/z_{cp} \\ 1 \end{bmatrix} \quad (3)$$
where $k_x = f/d_x$ and $k_y = f/d_y$; K is the camera's intrinsic matrix.
Actual lenses do not obey the ideal pinhole model due to lens distortion [28]. When modeling the camera, correction of radial and tangential distortions is often used to improve the measurement precision [29,30]. In Figure 2, pu denotes the unobservable distortion-free image coordinates, and pd = [xd, yd]T denotes the corresponding distorted coordinates to be corrected. pu and pd depend on the positions of the points in the image plane and are related by:
$$\begin{cases} x_{up} = k_1 x_d (x_d^2+y_d^2) + k_2 x_d (x_d^2+y_d^2)^2 + p_1\left(2x_d^2 + (x_d^2+y_d^2)^2\right) + p_2 (2 x_d y_d) \\ y_{up} = k_1 y_d (x_d^2+y_d^2) + k_2 y_d (x_d^2+y_d^2)^2 + p_2\left(2y_d^2 + (x_d^2+y_d^2)^2\right) + p_1 (2 x_d y_d) \end{cases} \quad (4)$$
where parameters (k1, k2) of radial distortion and parameters (p1, p2) of tangential distortion are used to model the camera. According to Equations (1)–(4), we can calculate the parameters of the camera model and the coordinates of points on the laser stripe.
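To make the use of Equations (1)–(4) concrete, the following Python sketch back-projects a single stripe pixel onto one laser plane. It is not the authors' implementation: OpenCV's radial/tangential distortion model is used as a stand-in for the correction of Equation (4), and the intrinsic matrix, distortion coefficients and plane coefficients are assumed to be available in the form listed in Table 1.

```python
import cv2
import numpy as np

def stripe_pixel_to_camera_point(u, v, K, dist, plane):
    """Back-project pixel (u, v) onto a laser plane a*x + b*y + c*z + 1 = 0
    (Equation (1)). cv2.undistortPoints removes lens distortion (a stand-in for
    Equation (4)); the pinhole model of Equation (3) then fixes the viewing ray,
    and the plane constraint fixes the depth along that ray."""
    a, b, c = plane
    pt = np.array([[[u, v]]], dtype=np.float64)
    xn, yn = cv2.undistortPoints(pt, K, dist)[0, 0]       # normalized, distortion-free coordinates
    ray = np.array([xn, yn, 1.0])                         # viewing ray in o_c-x_c y_c z_c
    z_c = -1.0 / (a * ray[0] + b * ray[1] + c * ray[2])   # depth where the ray meets the plane
    return z_c * ray                                      # P_c = [x_cp, y_cp, z_cp]
```

Applying such a function to every centre point of stripe s1 (or s2) yields the 3D profile of the weld surface in the camera frame.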

2.3. Calibration of CSL Sensor

In the sensor calibration process, the intrinsic parameters of the camera and the parameters of the laser planes need to be computed. Table 1 lists these parameters and their physical meanings, where or-xryrzr denotes the robot coordinate system.
Table 1. Parameters of the CSL sensor.
Category | Parameters | Physical Meaning
Camera intrinsic parameters | (fx, fy) | Focal length in the x, y direction
 | (u0, v0) | Principal point coordinates
 | (k1, k2) | Radial distortion parameters
 | (p1, p2) | Tangential distortion parameters
Light plane equations | (a1, b1, c1) | Laser plane L1 equation coefficients
 | (a2, b2, c2) | Laser plane L2 equation coefficients
 | ∠l1ol2 | Angle between L1 and L2
Global parameters | Rcr | Rotation from oc-xcyczc to or-xryrzr
 | Tcr | Translation from oc-xcyczc to or-xryrzr
The camera and the planar checkerboard target can move freely, and the relative poses between them are arbitrary, as shown in Figure 3. At each position, the camera captures one image with the laser stripe and one image without it by switching the laser projector on and off. The image with the laser stripe is used to calculate the equations of the laser planes, and the image without the laser stripe is used to calibrate the intrinsic parameters. A known target and Zhang's method [31] are used to determine the intrinsic parameters of the camera, in which the camera observes a planar checkerboard target from different perspectives. The calibration procedure gives a closed-form solution, followed by a nonlinear refinement based on the maximum likelihood criterion.
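For reference, this closed-form-plus-refinement calibration corresponds to what OpenCV's implementation of Zhang's method computes. The sketch below is only an illustration under assumptions: the checkerboard geometry (10 × 7 corners, 25 mm squares) is taken from Section 4.1, while the image file names are hypothetical.

```python
import cv2
import numpy as np

# Checkerboard geometry from Section 4.1: 10 x 7 inner corners, 25 mm squares
pattern, square = (10, 7), 25.0
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_pts, img_pts = [], []
for path in ["target_no_laser_01.png", "target_no_laser_02.png"]:   # hypothetical file names
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_pts.append(objp)
        img_pts.append(corners)

# Zhang's method: closed-form solution followed by nonlinear (maximum likelihood) refinement
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, gray.shape[::-1], None, None)
print("intrinsic matrix:\n", K, "\ndistortion (k1, k2, p1, p2, k3):", dist.ravel())
```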
Figure 3. Sensor calibration.
In Figure 3, the planar checkerboard target is also used to calibrate the laser planes. ot-xtytzt is the coordinate system of the target plane, and it is used as the global coordinate system during calibration. Under this condition, the z coordinates of all points on the target plane equal zero. The transformation from the 3D target plane coordinate system to the 2D image coordinate system is given as:
$$s\tilde{m} = A\,[\,r_1\;\; r_2\;\; r_3\;\; t\,]\,[x_t\;\; y_t\;\; z_t\;\; 1]^T = A\,[\,R\;\; t\,]\,\tilde{M} \quad (5)$$
where $\tilde{m} = [u\;\; v\;\; 1]^T$ are the homogeneous coordinates of $m = [u\;\; v]^T$; $\tilde{M} = [x_t\;\; y_t\;\; z_t\;\; 1]^T$ are the homogeneous coordinates of $M = [x_t\;\; y_t\;\; z_t]^T$ in the target plane coordinate system; s is an arbitrary scale factor; and the $3 \times 3$ matrix $R = (r_1\;\; r_2\;\; r_3)$ and the $3 \times 1$ vector $t = (t_1\;\; t_2\;\; t_3)^T$ are the rotation matrix and translation vector between the two coordinate systems, respectively. Since $z_t = 0$ on the target plane, the transformation between $\tilde{m}$ and $\tilde{M}$ is described as:
$$\tilde{m} = \frac{1}{s} A\,[\,r_1\;\; r_2\;\; t\,]\,\tilde{M} = H \tilde{M} \quad (6)$$
where the homography matrix H is given as $H = [h_1\;\; h_2\;\; h_3]^T$ ($h_i$, i = 1, 2, 3, is the ith row vector of H) and $\tilde{M} = [x_t\;\; y_t\;\; 1]^T$. H can be coarsely estimated from Equation (6) with at least four non-collinear points. Then a maximum likelihood estimation is used to refine H by minimizing the objective function:
$$F = \min \sum_i \left\| m_i - \frac{1}{h_3^T M_i}\begin{bmatrix} h_1^T M_i \\ h_2^T M_i \end{bmatrix} \right\|^2 \quad (7)$$
where $m_i = (u_i\;\; v_i\;\; 1)^T$ and $M_i = (x_i\;\; y_i\;\; 1)^T$ are the homogeneous coordinates of the ith feature point in the image coordinate system and of its corresponding point in the target plane coordinate system, respectively. The minimization problem can be solved by the Levenberg-Marquardt algorithm [32], with the initial value obtained from a linear least squares solution.
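The refinement of Equation (7) can be written compactly with a generic Levenberg-Marquardt solver. The sketch below assumes an initial homography H0 from the linear least squares step and uses SciPy; it is a minimal illustration rather than the authors' solver.

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(h, M, m):
    """Residuals of Equation (7): m_i - (1 / h3^T M_i) [h1^T M_i, h2^T M_i]^T.
    M: N x 3 homogeneous target points [x_t, y_t, 1]; m: N x 2 image points."""
    H = h.reshape(3, 3)
    proj = (H @ M.T).T
    return (m - proj[:, :2] / proj[:, 2:3]).ravel()

def refine_homography(H0, M, m):
    """Levenberg-Marquardt refinement of the coarse homography H0."""
    sol = least_squares(residuals, H0.ravel(), args=(M, m), method="lm")
    H = sol.x.reshape(3, 3)
    return H / H[2, 2]          # fix the scale ambiguity
```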
Figure 4. Flowchart of the feature point coordinate calculation.
After the camera calibration, we follow the flowchart shown in Figure 4 to calculate the equations of the laser planes. On the checkerboard, the intersection points of the edges of the black/white squares and the stripes are defined as feature points, as shown in Figure 5. The checkerboard is placed at different positions to obtain the camera coordinates of a sufficient number of points on the stripes (Figure 3). The space equations of the laser planes L1 and L2 are then fitted using the least squares method (LSM).
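Each laser plane of Equation (1) can be fitted to its stripe feature points (expressed in the camera frame) by linear least squares. A minimal sketch, not the authors' code:

```python
import numpy as np

def fit_laser_plane(points_c):
    """Least-squares fit of a*x + b*y + c*z + 1 = 0 (Equation (1)) to stripe
    feature points given in the camera frame (N x 3 array, N >= 3 non-collinear)."""
    A = np.asarray(points_c, dtype=float)
    coeffs, *_ = np.linalg.lstsq(A, -np.ones(len(A)), rcond=None)
    return coeffs               # (a, b, c)

# Orthogonality check used in Section 4.1: a1*a2 + b1*b2 + c1*c2 should be near 0
# n1, n2 = fit_laser_plane(points_on_s1), fit_laser_plane(points_on_s2); print(n1 @ n2)
```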
Figure 5. Image used for calibration of laser planes. (a) Capturing image with laser stripes; and (b) extracting feature points on the stripes.

3. Laser Stripe Segmentation and Centre Points Localization

Outdoors, the noise caused by sunlight has a distinguishing characteristic: the values of the R, G and B components are nearly equal. The information of interest may therefore not stand out clearly in RGB color space, and transforming the color space into a monochromatic value space can highlight the characteristics of the laser and make the stripe easier to segment. In this paper, a linear transformation is used to obtain a monochromatic value image from the RGB image.

3.1. Preprocessing Based on Monochromatic Value Space

In a discrete color image cij (of size M × N), the color values of a pixel are given as the three tristimulus values Rij, Gij, and Bij. The linear transformation is defined as Equation (8):
$$I_{ij} = \omega_r R_{ij} + \omega_g G_{ij} + \omega_b B_{ij} \quad (8)$$
In Equation (8), Iij is the desired monochromatic value image, $i \in \{1, 2, \ldots, M\}$, $j \in \{1, 2, \ldots, N\}$, and $\omega_r, \omega_g, \omega_b \in \mathbb{R}$. To segment the stripe, the characteristics of the laser should be taken into consideration, and an objective function needs to be defined to search for the optimal ωr, ωg, and ωb.
Owing to the concentration and strong brightness of the laser beam, one of its remarkable characteristics is the high concentration of energy, which makes the stripe appear as a waveform with a few spikes (as shown in Figure 6b, a laser profile example obtained by superimposing all the image row vectors). This means that the contrast between the laser stripe and the background is high. The objective function should retain and enhance this feature so that the laser stripe becomes much easier to segment after the transformation; thus, the contrast can be used to construct the objective function. In Figure 6a, the intensity values of the pixels within the laser stripe are close to their average. In these areas, the greater the energy concentration, the higher the contrast and the higher the kurtosis. Therefore, it is reasonable to measure the contrast by the kurtosis:
$$K = \frac{\kappa_4}{\kappa_2^2} = \frac{\mu_4}{\sigma^4} - 3 \quad (9)$$
In Equation (9), the kurtosis K is defined as the ratio of the fourth-order cumulant (FOC) $\kappa_4$ to the square of the second-order cumulant (SOC) $\kappa_2^2$; $\mu_4$ and $\sigma$ are the fourth central moment and the standard deviation of the energy distribution of the laser, respectively.
Figure 6. Example of laser profiles. (a) Captured image including the cross lasers stripe on the weld line; (b) Superposition of the luminance values row by row.
The transformed stripe is expected to be an impulse-like, ordered signal with high kurtosis, whereas the background is highly disordered with low kurtosis. In a communication system, for instance, disorder is equivalent to the concept of entropy, and the entropy is positively correlated with the randomness of the information. For this reason, Wiggins [33] first presented the minimum entropy deconvolution technique; he proposed maximizing a norm function called the Varimax norm, which is equivalent to maximizing the kurtosis under a zero-mean assumption [34]. The transformation model can therefore be called a minimum entropy model. When K < 0, Equation (9) can be modified as Equation (10):
$$\mathrm{abs}(K) = \left| \frac{\mu_4}{\sigma^4} - 3 \right| \quad (10)$$
Thereupon, a maximization objective for Equation (10) that is differentiable everywhere can be defined as the square of the kurtosis, as shown in Equation (11):
$$\max\big(\mathrm{abs}(K)\big) = \left|\frac{\mu_4}{\sigma^4} - 3\right| \;\Rightarrow\; \max\big(K^2\big) = \left(\frac{\mu_4}{\sigma^4} - 3\right)^2 \quad (11)$$
If the image is taken as a multi-channel signal (with N segments and M elements per segment), the Kurtosis can be written as:
$$K^2 = \left( \sum_{j=1}^{N} \frac{\sum_{i=1}^{M} (I_{ij} - \mu_j)^4}{\left( \sum_{i=1}^{M} (I_{ij} - \mu_j)^2 \right)^2} - 3 \right)^2 \quad (12)$$
In Equation (12), μj is the mean of column j of the transformed image Iij. To maximize Equation (12), K and ω need to satisfy Equation (13):
$$\frac{\partial K}{\partial \omega_r} = 0, \quad \frac{\partial K}{\partial \omega_g} = 0, \quad \frac{\partial K}{\partial \omega_b} = 0 \quad (13)$$
It is difficult to solve Equation (13) directly, but the maximum value of K can be approximately calculated according to [35]. The continuous coefficients in Equation (8) determine an infinite set of color feature spaces; ωr, ωg, and ωb could be learned from training data by maximum likelihood or maximum a posteriori estimation. For the convenience of calculation, ωr, ωg, and ωb are instead discretized as integers with values limited to [−2, 2], and Equation (13) is solved by exhaustive search. Considering that a red laser is used in the experiments, the R component of the captured image has higher intensity, so it is reasonable to require ωr ≥ 0, that is, $\omega_r \in \{0, 1, 2\}$, $\omega_g, \omega_b \in \{-2, -1, 0, 1, 2\}$, and $(\omega_r, \omega_g, \omega_b) \neq (0, 0, 0)$. The parameters are then found by traversing all candidate combinations.
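A minimal sketch of this exhaustive search is given below. It assumes the input is an RGB array (the channel order must be swapped if the image is loaded as BGR) and implements the column-wise squared kurtosis of Equation (12); it is an illustration, not the authors' implementation.

```python
import itertools
import numpy as np

def squared_kurtosis(I):
    """Equation (12): the image is treated as a multi-channel signal in columns."""
    d = I - I.mean(axis=0, keepdims=True)
    num = (d ** 4).sum(axis=0)
    den = ((d ** 2).sum(axis=0)) ** 2 + 1e-12        # guard against flat columns
    return ((num / den).sum() - 3.0) ** 2

def best_monochromatic_space(rgb):
    """Exhaustive search over integer (wr, wg, wb) in [-2, 2] with wr >= 0,
    excluding (0, 0, 0), as described in Section 3.1."""
    R, G, B = (rgb[..., k].astype(float) for k in range(3))
    best_w, best_k2 = None, -np.inf
    for wr, wg, wb in itertools.product((0, 1, 2), range(-2, 3), range(-2, 3)):
        if (wr, wg, wb) == (0, 0, 0):
            continue
        k2 = squared_kurtosis(wr * R + wg * G + wb * B)
        if k2 > best_k2:
            best_w, best_k2 = (wr, wg, wb), k2
    return best_w, best_k2
```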
Figure 7. The color transformation result: (a) captured image; (b) R component; (c) grayscale; (d) the monochromatic value of R-G.
In Figure 7, the optimal coefficient vector of the monochromatic value image is (ωr, ωg, ωb) = (1, −1, 0), i.e., the monochromatic value space is R-G. Compared with the R component of the stripe and with the grayscale of the original image (i.e., (ωr, ωg, ωb) = (0.30, 0.59, 0.11)), R-G is more effective in suppressing noise and improving the SNR. On the one hand, it distinguishes the laser from the background more efficiently; on the other hand, it contains a much smaller amount of data (one-third of that in Figure 7a).

3.2. Stripe Segmentation Based on Minimum Entropy Deconvolution (MED)

Ideally, the horizontal (and vertical) illumination distributions of the stripes are independent, and their intensity conforms to a Gaussian distribution. When the laser is projected onto the surface of an object, the form of the distribution does not change, but the stripe deforms with the geometrical shape of the object surface. In an ideal situation, the laser peaks lie at the center of the stripe, formed by the intensity maximum along each column (or row), and the laser stripe could be segmented from the peaks alone. In real environments, however, captured images contain various noises such as laser speckle, ambient noise, electrical noise, quantization noise, energy diffusion and excessive saturation [23]. Because of these noises, it is difficult to obtain reliable stripe centers by simply taking the maximum intensity along each profile of the laser stripe: the laser intensity does not always conform to a Gaussian distribution, and the maximum-valued locations of some columns (or rows) may not lie at the center of the stripe. Figure 8 shows that points off the center of the stripe can replace the true center points as the maximum points. From the perspective of signal processing, the structural information and the consistency of the laser stripe signal are destroyed. Hence, the energy concentration of the laser stripe should be enhanced so that the peaks return to their original locations.
Figure 8. Laser peaks and their locations.
After an appropriate monochromatic value space is chosen, the next step in improving the image quality is to move the peak points (theoretical maxima) back to their original positions. This process can be described as a 2D image deconvolution. Figure 9 shows the algorithmic model [33,35,36].
Figure 9. The model for 2D deconvolution.
It can be generally formulated as:
$$I = U(H * S) \quad (14)$$
$$B = W * I \quad (15)$$
Because the image is a set of discrete points, Equations (14) and (15) can be written as:
$$I_{ij} = U\left( \sum_{k,l} H_{kl}\, S_{i-k+1,\,j-l+1} \right) \quad (16)$$
$$B_{ij} = \sum_{k,l} W_{kl}\, I_{i-k+1,\,j-l+1} \quad (17)$$
In the Equations (14)–(17):
  • S denotes the laser stripe;
  • H denotes the point spread function of the optical imaging system;
  • U denotes a noise function;
  • I denotes the acquired image;
  • * denotes the 2D convolution operator;
  • (i, j) is discrete spatial coordinates;
  • W denotes the finite impulse response (FIR) filter, with W = 0 if i < 1 or j < 1 and $W * H = \delta_{i-\Delta i,\, j-\Delta j}$, where $\delta_{ij}$ is the Kronecker delta (discrete impulse signal) [37] and Δi, Δj are the phase delays;
  • B denotes the recovered image.
The goal of the deconvolution is to find the convolution kernel W that maximizes the kurtosis K, so that $B_{ij} \approx \alpha S_{i-\Delta i,\, j-\Delta j}$, in which α is a scale factor. MED is an effective technique for deconvolving impulsive sources from a mixture of signals. In [38], an iterative deconvolution approach is proposed: a FIR filter is used to minimize the entropy of the filtered signal, i.e., it searches for an optimum set of filter coefficients that recovers the output signal with the maximum kurtosis. This process enhances the energy concentration and the structured information in the output signal [39,40,41], recovering an impulsive signal that is more consistent than before.
The input image can be taken as a multi-channel signal either in columns (N segments with M elements per segment) or in rows (M segments with N elements per segment). In the former case the horizontal component of the laser stripe is largely restored while the vertical component is suppressed, and vice versa; the complete recovered information of the laser stripe is obtained by executing the two operations separately. The column-wise model used to extract the horizontal laser line can be formulated as:
$$B_{ij} = \sum_{k=1}^{L} W_k\, I_{i-k+1,\,j}, \quad k = 1, \ldots, L \quad (18)$$
$$O(W_k) = \left( \sum_{j=1}^{N} \frac{\sum_{i=1}^{M} (B_{ij} - \mu_{Bj})^4}{\left( \sum_{i=1}^{M} (B_{ij} - \mu_{Bj})^2 \right)^2} - 3 \right)^2 \quad (19)$$
In Equations (18) and (19), μBj is the mean of column j of Bij and L is the order of the filter; both have a significant impact on the MED output. Equation (19) is the objective function.
The above model is similar to Wiggins' method except for the objective function, which does not affect the solution procedure. MED searches for an optimum set of filter coefficients Wk that recovers the output signal with the maximum kurtosis. For convenience, the filter is normalized as:
$$\sum_{k=1}^{L} W_k^2 = 1 \quad (20)$$
Under this constraint, Iij is converted into Bij with its energy preserved and its entropy reduced. The reason can be seen by examining Equation (18): Bij is obtained by weighting, shifting, overlapping and adding the corresponding components of Iij (1 < L < N). (i) If L = 1, the filter has no impact on the output; (ii) if L ≥ N, the last L − N + 1 elements have no impact on the output; (iii) it is a well-posed estimation problem because there are fewer parameters than in [42]. Generally, the greater L is, the more easily the kurtosis K converges to a high value, so an appropriate L should balance the kurtosis, the energy and the computation; from the perspective of minimal entropy or maximal kurtosis, the criterion is therefore not optimal but acceptable. Gonzalez suggests that L should lie between 50% and 100% of the number of elements per segment [38]. In our experiments, L is empirically set to 50%, which satisfies the experimental requirements. The deconvolution model (Figure 9) of this problem can be described as Figure 10.
Figure 10. The MED model of the multichannel signal.
The extremum of Equation (19) is obtained by Equation (21):
$$\frac{\partial O(W_k)}{\partial W_k} = 0 \quad (21)$$
An iteratively converging local-maximum solution can be derived as:
$$\sum_{l=1}^{L} W_l \sum_{j=1}^{N} K_j\, U_j^{-1} \sum_{i=1}^{M} I_{i-l,\,j}\, I_{i-k,\,j} = \sum_{j=1}^{N} U_j^{-2} \sum_{i=1}^{M} (B_{ij} - \mu_{Bj})^3\, I_{i-k,\,j} \quad (22)$$
where:
$$U_j = \sum_{i=1}^{M} (B_{ij} - \mu_{Bj})^2 \quad (23)$$
$$K_j = \frac{\sum_{i=1}^{M} (B_{ij} - \mu_{Bj})^4}{\left( \sum_{i=1}^{M} (B_{ij} - \mu_{Bj})^2 \right)^2} \quad (24)$$
and Wk is iteratively selected. The general procedure is listed in Table 2.
Table 2. General Procedure of MED.
Step | Algorithm
1 | Initialize the adaptive FIR filter: $W_k = [1\ 1 \ldots 1]/\sqrt{L}$, K = 0.
2 | Compute the output signal Bij according to Equation (18).
3 | Input Bij into Equations (22)–(24) to obtain Wk.
4 | Input Bij into Equation (19) to compute the kurtosis K and ΔK.
5 | Repeat Steps 2–4 until the specified number of iterations is reached or the change ΔK between iterations falls below a specified small value.
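The following sketch assembles Equations (18)–(24) and the procedure of Table 2 into a column-wise MED routine. It is one possible reading of the equations (0-based indexing, zero padding at the column boundaries, a direct linear solve of Equation (22)) rather than the authors' C++ implementation.

```python
import numpy as np

def med_columns(I, L, n_iter=30, tol=1e-4):
    """Column-wise MED of the monochromatic image I (M x N): returns the
    recovered image B and the FIR filter W (Equations (18)-(24), Table 2)."""
    M, N = I.shape
    W = np.ones(L) / np.sqrt(L)                      # Step 1: W_k = [1 ... 1]/sqrt(L)
    K_prev = 0.0
    # Shifted copies of I: shifts[k, i, j] = I_{i-k, j} with zero padding
    shifts = np.stack([np.vstack([np.zeros((k, N)), I[:M - k, :]]) for k in range(L)])
    for _ in range(n_iter):
        # Step 2: B_ij = sum_k W_k I_{i-k+1, j}  (Equation (18))
        B = np.einsum("k,kij->ij", W, shifts)
        D = B - B.mean(axis=0, keepdims=True)        # B_ij - mu_Bj
        U = (D ** 2).sum(axis=0) + 1e-12             # Equation (23)
        Kj = (D ** 4).sum(axis=0) / U ** 2           # Equation (24)
        # Step 3: solve the linear system of Equation (22) for the new filter
        A = np.einsum("j,kij,lij->kl", Kj / U, shifts, shifts)
        b = np.einsum("j,ij,kij->k", 1.0 / U ** 2, D ** 3, shifts)
        W = np.linalg.solve(A, b)
        W /= np.sqrt((W ** 2).sum())                 # normalization of Equation (20)
        # Steps 4-5: kurtosis of Equation (19) and the stopping test on its change
        K = (Kj.sum() - 3.0) ** 2
        if abs(K - K_prev) < tol:
            break
        K_prev = K
    B = np.einsum("k,kij->ij", W, shifts)
    return B, W
```

Running the same routine on the transposed image (rows as channels) recovers the vertical component, as described above.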
Figure 11. The comparison of energy concentration before and after MED processing. (a) waveform of the input signal in the columns; (b) waveform of the output signal in the columns.
Figure 11 shows the comparison of energy concentration between the input signal and output signal (deconvolved signal by MED).
Obviously, after MED processing, the SNR is higher and the peaks of the laser stripe are much steeper. The vertical laser stripe is segmented in the same way as the horizontal one.

3.3. Centre Points Localization of Laser Stripe

The filtering introduces a group delay in the rows (or columns) relative to the ideal curve, which must be taken into account. This phase shift can be computed according to the 2D convolution theorem, and Equation (25) can be obtained from Equation (18):
$$W_k * I_{ij} = \begin{bmatrix} B_{ij} \\ Z \end{bmatrix} \quad (25)$$
where Z is an (L − 1) × N zero matrix, and:
$$\Delta i = P_I + P_W, \quad \Delta j = 0 \quad (26)$$
in which PI and PW are the phases of Iij and Wk, respectively, and Δi and Δj are the vertical and horizontal displacements from the segmented stripe to the original stripe. The vertical phases and displacements can be calculated in the same way, so segmenting the vertical laser stripe in rows follows the same process.
Based on the above results of stripe segmentation and illumination restoration, Steger's method is used to extract the centre points of the laser stripes [25]. The method provides good localization of the centre points using Gaussian filtering and edge detection. Figure 12 shows the localization results of the centre points.
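For completeness, the sketch below extracts subpixel centre points in the spirit of Steger's detector [25] from the MED-recovered image: Gaussian derivatives, the Hessian eigenvector across the line, and the zero of the first directional derivative along that direction. It is a simplified, unoptimized illustration without the linking step of the full method, and sigma and rel_thresh are assumed tuning values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def stripe_centre_points(B, sigma=2.0, rel_thresh=0.3):
    """Subpixel centre points of the recovered stripe image B (rows = y, cols = x)."""
    gx = gaussian_filter(B, sigma, order=(0, 1))
    gy = gaussian_filter(B, sigma, order=(1, 0))
    gxx = gaussian_filter(B, sigma, order=(0, 2))
    gyy = gaussian_filter(B, sigma, order=(2, 0))
    gxy = gaussian_filter(B, sigma, order=(1, 1))
    thr = rel_thresh * max(np.abs(gxx).max(), np.abs(gyy).max())

    points = []
    rows, cols = B.shape
    for i in range(rows):
        for j in range(cols):
            H = np.array([[gyy[i, j], gxy[i, j]],
                          [gxy[i, j], gxx[i, j]]])
            evals, evecs = np.linalg.eigh(H)
            idx = np.argmin(evals)                # strongest negative curvature: across a bright line
            if evals[idx] > -thr:
                continue
            ny, nx = evecs[:, idx]
            # offset along the normal where the first directional derivative vanishes
            t = -(gy[i, j] * ny + gx[i, j] * nx) / evals[idx]
            if abs(t * ny) <= 0.5 and abs(t * nx) <= 0.5:   # centre lies inside this pixel
                points.append((i + t * ny, j + t * nx))
    return np.array(points)
```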
Figure 12. Localization results of the centre points.

4. Results and Discussion

Three experiments are conducted: segmentation of the stripe in weld line images, 3D measurement of the weld line, and weld line tracking with the wall climbing robot. The proposed sensor and approach are tested on five sets of video data captured by the robot on a wind power tower. The configuration of the CSL sensor is listed in Table 3. The algorithms are implemented in C++ and tested on an Intel Core i3-2130 CPU at 3.4 GHz.
Table 3. Configuration of the CSL sensor.
Device | Parameters
Camera | CCD: SONY, 1/4 inch
 | Resolution: 640 × 480 pixels
 | Pixel size: 5.6 μm × 5.6 μm
 | Frame rate: 20 fps
 | Focal length: 8 mm
 | Field of view: 43.7°
Laser projector | Size: ϕ9 × 23 mm
 | Wavelength: 700 nm
 | Operating voltage: DC 5 V
 | Operating current: 20–50 mA
 | Output power: 250 mW
 | Fan angle: 60°

4.1. CSL Sensor Calibration

A checkerboard is used as the target board for the camera calibration. The side length of each black/white square on the checkerboard is 25 mm, and there are 10 × 7 corner points on the target board. Nine images of the target board are captured at different positions, as shown in Figure 13. These images are used to calibrate the camera and to calculate the equations of the laser planes. The coordinates of the feature points on the laser stripes are extracted by the method in Section 3. Based on the method described in Section 2, the feature points are transformed into the camera coordinate system, and the equations of the laser planes are then fitted; a plane equation can be obtained from at least three non-collinear feature points. The calibration results of the CSL sensor are listed in Table 4.
Figure 13. Calibration checkerboard with laser stripe.
Table 4. The calibration results of the CSL sensor.
Category | Parameters | Values
Camera intrinsic parameters | (fx, fy) | (922.4350, 917.3560)
 | (u0, v0) | (329.1680, 270.5660)
 | (k1, k2) | (−291.459 × 10⁻³, 157.027 × 10⁻³)
 | (p1, p2) | (−0.1354 × 10⁻³, −0.2682 × 10⁻³)
Light plane equations | (a1, b1, c1) | (−0.18 × 10⁻³, 1.86 × 10⁻³, 1.39 × 10⁻³)
 | (a2, b2, c2) | (−90.11 × 10⁻³, 2.463 × 10⁻³, 8.935 × 10⁻³)
 | ∠l1ol2 | 89.9981°
Global parameters | Rcr | [0.680 0.961 0.001; 0.732 0.704 0.027; 0.019 0.705 0.999]
 | Tcr | [1.618 350.480 59.871]T
It can be seen from Table 4 that the dot product of the coefficients of the two plane equations is a1a2 + b1b2 + c1c2 = 3.322 × 10⁻⁵, which is close to zero. The measurement accuracy of the CSL is evaluated by comparing standard values and measured values [43]. The standard value is given by the coordinates of the intersection points of the target board with the rays of light between the optical center and the feature points, as shown in Figure 3. The measured value is given by the coordinates of the intersection points of the laser planes with the same rays. At a random position, 18 feature points are selected to evaluate the measurement accuracy of the system using the laser plane equations.
Table 5. Measurement accuracy.
Image Coordinates (u, v) (pixels) | Standard Value (x, y, z) (mm) | Measured Value (x, y, z) (mm) | Errors (Δx, Δy, Δz) (mm)
434.812, 216.242 | 224.751, −61.644, 237.893 | 224.542, −61.586, 237.671 | −0.209, 0.058, −0.222
521.702, 339.656 | 208.124, −60.198, 231.686 | 207.856, −60.121, 231.388 | −0.268, 0.077, −0.298
520.861, 304.699 | 191.494, −58.802, 225.479 | 191.424, −58.781, 225.397 | −0.070, 0.021, −0.082
519.817, 272.006 | 174.863, −57.407, 219.272 | 174.962, −57.439, 219.395 | 0.099, −0.032, 0.123
518.237, 238.050 | 166.850, −56.695, 216.280 | 166.851, −56.695, 216.281 | 0.001, 0, 0.001
516.309, 220.555 | 158.236, −55.971, 213.065 | 158.309, −55.996, 213.163 | 0.073, −0.025, 0.098
515.171, 204.063 | 141.610, −54.515, 206.858 | 141.588, −54.507, 206.826 | −0.022, 0.008, −0.032
512.486, 170.342 | 124.986, −53.019, 200.650 | 124.763, −52.925, 200.291 | −0.223, 0.094, −0.359
508.894, 138.080 | 177.332, 57.495, 216.540 | 177.262, 57.472, 216.455 | −0.070, −0.023, −0.085
577.181, 225.223 | 175.097, 32.586, 216.503 | 175.014, 32.571, 216.399 | −0.083, −0.015, −0.104
565.821, 224.684 | 172.689, 7.687, 216.400 | 172.695, 7.687, 216.408 | 0.006, 0, 0.008
554.325, 223.663 | 170.521, −17.226, 216.388 | 170.473, −17.221, 216.328 | −0.048, 0.005, −0.060
539.553, 223.101 | 168.219, −42.131, 216.325 | 168.197, −42.126, 216.297 | −0.022, 0.005, −0.028
525.946, 222.546 | 165.957, −67.039, 216.278 | 165.937, −67.031, 216.251 | −0.020, 0.008, −0.027
510.778, 220.525 | 163.709, −91.947, 216.235 | 163.683, −91.932, 216.200 | −0.026, 0.015, −0.035
494.025, 219.462 | 161.487, −116.857, 216.202 | 161.441, −116.823, 216.140 | −0.046, 0.034, −0.062
477.062, 218.398 | 159.212, −141.763, 216.150 | 159.175, −141.730, 216.099 | −0.037, 0.033, −0.051
457.027, 217.322 | 156.884, −166.667, 216.078 | 156.884, −166.667, 216.078 | 0, 0, 0
RMS errors (mm) | — | — | 0.094, 0.034, 0.120
Table 5 lists the measurement accuracy evaluation data. The Root-Mean-Square (RMS) errors are (0.094, 0.034, 0.120) mm in the x, y and z directions. The measurement errors are shown in Figure 14 and Figure 15.
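For reproducibility, the per-axis RMS figures follow directly from the coordinate errors of Table 5; a trivial sketch:

```python
import numpy as np

def per_axis_rms(standard_xyz, measured_xyz):
    """Root-mean-square error in x, y and z between the standard and measured
    coordinates of the feature points (both N x 3 arrays, in mm), as in Table 5."""
    diff = np.asarray(measured_xyz, float) - np.asarray(standard_xyz, float)
    return np.sqrt((diff ** 2).mean(axis=0))
```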
Figure 14. Measurement errors in x, y and z directions.
Figure 15. The absolute errors in feature points.

4.2. Accuracy and Speed of Stripe Segmentation

This paper quantitatively compares the performance of the proposed approach (MED) with centre of mass (CM), linear approximation (LA), quadratic approximation (QA) and Akima splines approximation (AA). For each method, the results in the different monochromatic value spaces (R and R-G), the absolute errors and the running times are computed.
Figure 16. The segmentation results of the above methods in R component space and R-G space: (a–e) and (f–j) are the results of CM, LA, QA, AA and MED in R and R-G, respectively.
For the test images, the segmentation quality of the comparison algorithms and of the proposed algorithm is shown in Table 6, Figure 16 and Figure 17. The runtime per image is the mean value obtained by processing 1000 images. Overall, it can be seen from the table and figures that (i) the component R-G is better than R for all methods; and (ii) the MED-based laser stripe segmentation is more robust, accurate and fast than the other approaches. It is worth noting that the data of the vertical laser stripe obtained in R are so noisy that the time consumed becomes too long to be practical; taking this factor into account, it is unfeasible to use LA, QA, AA and MED to fit the vertical laser stripe in the R space.
Figure 17. The errors of the above methods in R and R-G: (a–e) are the errors of the horizontal stripe using CM, LA, QA, AA and MED in R and R-G, respectively; (f–j) are the errors of the vertical stripe using CM, LA, QA, AA and MED in R and R-G, respectively.
Table 6. Laser Stripe Segmentation Performance.
Laser Stripe | Color Space | Index | CM | LA | QA | AA | MED
Horizontal laser stripe | R | Average error (mm) | 0.432 | 0.667 | 0.271 | 0.311 | 0.231
 | | Running time (ms) | 18.3 | 320.2 | 168.1 | 130.3 | 22.3
 | R-G | Average error (mm) | 0.330 | 0.416 | 0.267 | 0.291 | 0.231
 | | Running time (ms) | 17.9 | 314.0 | 167.6 | 196.6 | 20.9
Vertical laser stripe | R | Average error (mm) | 1.001 | 66.710 | 73.334 | 70.350 | 71.050
 | | Running time (ms) | 18.8 | — | — | — | 20.6
 | R-G | Average error (mm) | 0.700 | 0.431 | 0.295 | 0.327 | 0.235
 | | Running time (ms) | 17.6 | 120.2 | 147.2 | 166.6 | 19.9
"—" means that the time is too long to be practical (num = 500).
Nonetheless, the error of MED is, overall, smaller than that of the former methods, and its results are smoother than the fitted data obtained by LA, QA and AA in R-G. This is explained by the flatter distribution of its errors: although some of its elements are larger, they have similar offsets relative to the true data. Another point worth noting is that the errors of the vertical stripe obtained by MED are larger only than those of LA in R-G space, which results from different reasons. Firstly, the vertical laser stripe fitted by LA is, by linear approximation, a straight line in the image. Secondly, since only a few points with larger error values carry useful information, breakpoint detection is not required in MED, which makes the method more effective and robust than LA, and the vertical laser stripe obtained by MED is smoother than that of LA. Therefore, the overall performance of MED is better.
Figure 18 shows the laser stripe extraction results for four frames captured in an outdoor environment. Despite the strong and varying illumination in these frames, the algorithm exhibits good performance. All the calculations are based on the full image; the speed of MED is high enough for real environments, and selecting a region of interest would further increase the running speed.
Figure 18. Laser stripe segmentation results with different illumination interferences. (a,b) Cross and straight weld lines in sunshine; (c) A weld line with a reflective surface; (d) A weld line in shadow.

4.3. Weld Line Detection and Tracking of Wall Climbing Robot

In order to verify the feasibility of the CSL sensor, the calibrated sensor is tested on a wind power tower with the wall climbing robot platform, as shown in Figure 19. Figure 20 describes the relations between the camera coordinate system, the robot coordinate system and the calibration target coordinate system, in which (Rcr, Tcr), (Rtr, Ttr) and (Rtc, Ttc) denote the rotations and translations between the three coordinate systems, respectively. The location of the weld line is detected by the method in [27]; according to the detection results, the robot can follow the weld line and perform NDT.
Figure 19. Wall climbing robot prototype. (a) Robot with the CSL device; (b) Robot working in vertical and horizontal direction.
In this paper, the measured object is the weld line of a wind power tower. The tower is a bulky cone with a height of about 30–50 m and a diameter of about 2.2–4 m, so the structured light sensor cannot reconstruct the 3D information of the whole tower; we only reconstruct the local 3D information around the weld line. Figure 21 shows the detection and measurement experiments in the outdoor environment, where the robot coordinate system is taken as the global coordinate system. In practice, the speed of the wall climbing robot is about 30 mm/s (about 1.5 mm/frame at a video frame rate of 20 fps). For a captured image of 640 × 480 pixels, the difference between two successive frames is about 5–6 rows of pixels, so the overlap is about 99%. Under the circumstances of high illumination and excessive noise, the robustness of the sensor stands out. Figure 21b shows the extracted laser stripe on the weld lines, Figure 21c illustrates the detection results of the T-intersection weld lines, and Figure 21d shows the corresponding measurement results. Both vertical and horizontal weld lines appear in the video sequence, and the CSL sensor can detect and measure both when the robot moves close to the intersection of the two weld lines. From the image space to the object space, the coordinate transformation is:
$$[x_r,\; y_r,\; z_r]^T = R_{cr}\, K^{-1}\, [u,\; v,\; 1]^T + T_{cr} \quad (27)$$
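A minimal sketch of the extrinsic part of Equation (27) is given below, assuming the stripe centre points have already been recovered in the camera frame from the laser-plane constraint of Section 2.2; Equation (27) composes that back-projection with the global rotation Rcr and translation Tcr of Table 1.

```python
import numpy as np

def camera_to_robot(points_c, R_cr, T_cr):
    """Equation (27): transform stripe points from the camera frame o_c-x_c y_c z_c
    to the robot frame o_r-x_r y_r z_r using the global parameters (R_cr, T_cr)."""
    P = np.asarray(points_c, dtype=float)            # N x 3 points in the camera frame
    return P @ np.asarray(R_cr, dtype=float).T + np.asarray(T_cr, dtype=float)
```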
Figure 20. The conversion of the camera coordinate system, the robot coordinate system and the calibration target coordinate system.
Figure 21. Measurement and 3D coordinates of straight and cross weld lines. (a) Measure straight weld line; (b) 3D coordinates of the straight weld line; (c) Measure cross weld line; (d) 3D coordinates of the cross weld line.
When the robot moves along a straight weld line, the measured weld line locations can be used directly for navigation. When the robot moves close to T-intersection weld lines, two convex arcs appear on the laser stripes, and the vertical and horizontal locations of the weld lines can be extracted simultaneously by the CSL sensor. Figure 22 shows the navigation and motion trajectories of the robot at the T-intersection of the weld lines; the average tracking error relative to the ideal central line is less than 2 mm.
Figure 22. Weld line tracking results of the wall climbing robot. (a) Central lines of two cross weld lines (ground-truth); (b) Motion trail of the robot; (c) The tracking errors.
The camera does not use any optical filter, which makes the segmentation process more adaptive and lower in cost than other approaches used in industry. Although a high noise level exists in the low-quality images, the sensor and the proposed algorithm are robust enough to segment the laser stripe and detect weld lines accurately.

5. Conclusions

This paper proposes a structured light sensor with cross stripes and a stripe segmentation algorithm for vision navigation of a wall climbing robot in outdoor environments. The simple but effective cross structured light device provides an example of weld line measurement, while the MED-based segmentation offers a general methodology for a class of measurement problems in industrial applications. The proposed algorithm chooses an adaptive monochromatic value space for preprocessing and then recovers the laser stripe through the deconvolution process.
For a color image with a resolution of 640 × 480 pixels, the average running time is about 20 ms. In the sensor calibration, the RMS measurement error is within 0.120 mm; in the field experiments, the absolute measurement error is less than 0.3 mm; and in the robot navigation experiments, the average tracking error is less than 2 mm. The results of the experiments demonstrate that the designed sensor and the proposed algorithm offer high accuracy, robustness and efficiency for measurement and navigation.

Acknowledgments

This work is supported by the Fundamental Research Funds for the Central Universities with No. HEUCF100604, the National Science Foundation of China under Grant No. 61202455 and Grant No. 61472096, and the Heilongjiang Postdoctoral Sustentation Fund with No. LBH-Z14055.

Author Contributions

Liguo Zhang wrote the paper and conceived the experiment; Guisheng Yin supervised the overall work and reviewed the paper; Jianguo Sun, Jing Zhao and Qilong Han contributed the experimental data and analysis.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Silberman, N.; Fergus, R. Indoor scene segmentation using a structured light sensor. In Proceedings of the IEEE International Conference on Computer Vision Workshops, Barcelona, Spain, 6–13 November 2011; pp. 601–608.
  2. Park, J.B.; Lee, S.H.; Lee, J. Precise 3D lug pose detection sensor for automatic robot welding using a structured-light vision system. Sensors 2009, 9, 7550–7565. [Google Scholar] [CrossRef] [PubMed]
  3. Huang, W.; Radovan, K. A laser-based vision system for weld quality inspection. Sensors 2011, 11, 506–521. [Google Scholar] [CrossRef] [PubMed]
  4. Zhao, X.; Liu, H.; Yu, Y.; Xu, X.; Hu, W.; Li, M.; Ou, J. Bridge Displacement Monitoring Method Based on Laser Projection Sensing Technology. Sensors 2015, 15, 8444–8463. [Google Scholar] [CrossRef] [PubMed]
  5. Usamentiaga, R.; Molleda, J.; Garcia, D.F. Structured-Light Sensor Using Two Laser Stripes for 3D Reconstruction without Vibrations. Sensors 2014, 14, 20041–20063. [Google Scholar] [CrossRef] [PubMed]
  6. Barone, S.; Alessandro, P.; Armando, V.R. 3D Reconstruction and Restoration Monitoring of Sculptural Artworks by a Multi-Sensor Framework. Sensors 2012, 12, 16785–16801. [Google Scholar] [CrossRef] [PubMed]
  7. Zhan, D.; Yu, L.; Xiao, J.; Chen, T. Multi-Camera and Structured-Light Vision System (MSVS) for Dynamic High-Accuracy 3D Measurements of Railway Tunnels. Sensors 2015, 15, 8664–8684. [Google Scholar] [CrossRef] [PubMed]
  8. Bieri, L.S.; Jacques, J. Three-dimensional vision using structured light applied to quality control in production line. Proc. SPIE 2004, 5457. [Google Scholar] [CrossRef]
  9. Usamentiaga, R.; Molleda, J.; García, D.F.; Bulnes, F.G. Machine vision system for flatness control feedback. In Proceedings of the IEEE International Conference on Machine Vision, Dubai, The United Arab Emirates, 28–30 December 2009; pp. 105–110.
  10. Appia, V.; Pedro, G. Comparison of fixed-pattern and multiple-pattern structured light imaging systems. Proc. SPIE 2014, 8909. [Google Scholar] [CrossRef]
  11. Gupta, M.; Qi, Y.; Nayar, S.K. Structured light in sunlight. In Proceedings of the IEEE International Conference on Computer Vision, Sydney, Australia, 1–8 December 2013; pp. 545–552.
  12. O’Toole, M.; Mather, J.; Kutulakos, K.N. 3D shape and indirect appearance by structured light transport. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 3246–3253.
  13. Liu, D.; Cheng, X.; Yang, Y.-H. Frequency-Based 3D Reconstruction of Transparent and Specular Objects. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 660–667.
  14. Molleda, J.; Usamentiaga, R.; García, D.F.; Bulnes, F.G. Real-time flatness inspection of rolled products based on optical laser triangulation and three-dimensional surface reconstruction. J. Electron. Imaging 2010, 19, 031206. [Google Scholar] [CrossRef]
  15. Usamentiaga, R.; Molleda, J.; Garcia, D.F.; Bulnes, F.G. Removing vibrations in 3D reconstruction using multiple laser stripes. Opt. Lasers Eng. 2014, 53, 51–59. [Google Scholar] [CrossRef]
  16. Fisher, R.B.; Naidu, D.K. A comparison of algorithms for subpixel peak detection. In Image Technology; Springer: Berlin/Heidelberg, Germany, 1996; pp. 385–404. [Google Scholar]
  17. Haug, K.; Pritschow, G. Robust laser-stripe sensor for automated weld-seam-tracking in the shipbuilding industry. In Proceedings of the IEEE Annual Conference of the Industrial Electronics Society, Aachen, Germany, 31 August–4 September 1998; Volume 2, pp. 1236–1241.
  18. Strobl, K.H.; Sepp, W.; Wahl, E.; Bodenmuller, T.; Suppa, M.; Seara, J.F.; Hirzinger, G. The DLR multisensory hand-guided device: The laser stripe profiler. In Proceedings of the 2004 IEEE International Conference on Robotics and Automation, New Orleans, LA, USA, 26 April–1 May 2004; Volume 2, pp. 1927–1932.
  19. Li, Y.; Li, Y.F.; Wang, Q.L.; Xu, D.; Tan, M. Measurement and defect detection of the weld bead based on online vision inspection. IEEE Trans. Instrum. Meas. 2010, 59, 1841–1849. [Google Scholar]
  20. Molleda, J.; Usamentiaga, R.; Garcia, D.F.; Bulnes, F.G.; Ema, L. Shape measurement of steel strips using a laser-based three-dimensional reconstruction technique. IEEE Trans. Ind. Appl. 2011, 47, 1536–1544. [Google Scholar] [CrossRef]
  21. Usamentiaga, R.; Molleda, J.; García, D.F. Fast and robust laser stripe extraction for 3D reconstruction in industrial environments. Mach. Vis. Appl. 2012, 23, 179–196. [Google Scholar] [CrossRef]
  22. Ofner, R.; O’Leary, P.; Leitner, M. A collection of algorithms for the determination of construction points in the measurement of 3D geometries via light-sectioning. In Workshop on European Scientific and Industrial Collaboration: Advanced Technologies in Manufacturing; University of Wales College: Newport, UK, 1999; pp. 505–512. [Google Scholar]
  23. Forest, J.; Salvi, J.; Cabruja, E.; Pous, C. Laser stripe peak detector for 3D scanners. A FIR filter approach. In Proceedings of the IEEE International Conference on Pattern Recognition, Cambridge, UK, 23–26 August 2004; Volume 3, pp. 646–649.
  24. Schnee, J.; Futterlieb, J. Laser line segmentation with dynamic line models. In Computer Analysis of Images and Patterns; Springer: Berlin/Heidelberg, Germany, 2011; pp. 126–134. [Google Scholar]
  25. Steger, C. An unbiased detector of curvilinear structures. IEEE Trans. Pattern Anal. Mach. Intell. 1998, 20, 113–125. [Google Scholar] [CrossRef]
  26. Xu, D.; Wang, L.; Tu, Z.; Tan, M. Hybrid visual servoing control for robotic arc welding based on structured light vision. Acta. Autom. Sin. 2005, 31, 596. [Google Scholar]
  27. Zhang, L.; Ye, Q.; Yang, W.; Jiao, J. Weld line detection and tracking via spatial-temporal cascaded hidden Markov models and cross structured light. IEEE Trans. Instrum. Meas. 2014, 63, 742–753. [Google Scholar] [CrossRef]
  28. Sturm, P.; Ramalingam, S.; Tardif, J.P.; Gasparini, S.; Barreto, J. Camera models and fundamental concepts used in geometric computer vision. Found. Trends Comp. Graph. Vis. 2011, 6, 1–183. [Google Scholar] [CrossRef]
  29. Tsai, R.Y. A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE J. Robot. Autom. 1987, 3, 323–344. [Google Scholar] [CrossRef]
  30. Medioni, G.; Kang, S.B. Emerging Topics in Computer Vision; Prentice Hall PTR: New York, NY, USA, 2004. [Google Scholar]
  31. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334. [Google Scholar] [CrossRef]
  32. Moré, J.J. The Levenberg-Marquardt algorithm: Implementation and theory. In Numerical Analysis; Springer: Berlin/Heidelberg, Germany, 1978; pp. 105–116. [Google Scholar]
  33. Wiggins, R.A. Minimum entropy deconvolution. Geoexploration 1978, 16, 21–35. [Google Scholar] [CrossRef]
  34. McDonald, G.L.; Zhao, Q.; Zuo, M.J. Maximum correlated Kurtosis deconvolution and application on gear tooth chip fault detection. Mech. Syst. Signal Process. 2012, 33, 237–255. [Google Scholar] [CrossRef]
  35. Collins, R.T.; Liu, Y.; Leordeanu, M. Online selection of discriminative tracking features. IEEE Trans. Pattern Anal. Mach. Intell. 2005, 27, 1631–1643. [Google Scholar] [CrossRef] [PubMed]
  36. Ohta, Y.I.; Kanade, T.; Sakai, T. Color information for region segmentation. Comput. Graph.Image Process. 1980, 13, 222–241. [Google Scholar] [CrossRef]
  37. Bronstein, M.M.; Bronstein, A.M.; Zibulevsky, M.; Zeevi, Y.Y. Blind deconvolution of images using optimal sparse representations. IEEE Trans. Image Process. 2005, 14, 726–736. [Google Scholar] [CrossRef] [PubMed]
  38. González, G.; Badra, R.E.; Medina, R.; Regidor, J. Period estimation using minimum entropy deconvolution (MED). Signal Process. 1995, 41, 91–100. [Google Scholar] [CrossRef]
  39. Sawalhi, N.; Randall, R.B.; Endo, H. The enhancement of fault detection and diagnosis in rolling element bearings using minimum entropy deconvolution combined with spectral kurtosis. Mech. Syst. Signal Process. 2007, 21, 2616–2633. [Google Scholar] [CrossRef]
  40. Nandi, A.K.; Mämpel, D.; Roscher, B. Blind deconvolution of ultrasonic signals in nondestructive testing applications. IEEE Trans. Signal Process. 1997, 45, 1382–1390. [Google Scholar] [CrossRef]
  41. Boumahdi, M.; Lacoume, J.L. Blind identification using the Kurtosis: Results of field data processing. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, Detroit, MI, USA, 9–12 May 1995; Volume 3, pp. 1980–1983.
  42. Donoho, D. On minimum entropy deconvolution. In Applied Time Series Analysis II; Elsevier: Amsterdam, The Netherlands, 1981; pp. 565–608. [Google Scholar]
  43. Zhou, F.; Peng, B.; Cui, Y.; Wang, Y.; Tan, H. A novel laser vision sensor for omnidirectional 3D measurement. Opt. Laser Technol. 2013, 45, 1–12. [Google Scholar] [CrossRef]
