Article

Research on Trajectory Recognition and Control Technology of Real-Time Tracking Welding

1 Key Laboratory of Automobile Materials, School of Materials Science and Engineering, Jilin University, Changchun 130025, China
2 School of Mechanical and Aerospace Engineering, Jilin University, Changchun 130025, China
3 Key Laboratory of Solidification Control and Digital Preparation Technology (Liaoning Province), School of Materials Science and Engineering, Dalian University of Technology, Dalian 116024, China
* Author to whom correspondence should be addressed.
Sensors 2022, 22(21), 8546; https://doi.org/10.3390/s22218546
Submission received: 20 September 2022 / Revised: 23 October 2022 / Accepted: 28 October 2022 / Published: 6 November 2022
(This article belongs to the Section Physical Sensors)

Abstract

Real-time tracking welding assisted by structured light vision enhances the intelligence of robotic welding, significantly shortening teaching time and guaranteeing accuracy for user-customized product welding. However, most image processing algorithms lack robustness in welding practice, and most trajectory recognition and control algorithms do not consider a safety regime for tracking welding. To address these two problems, an adaptive feature extraction algorithm is proposed that accurately extracts the seam center from the continuous, discontinuous or fluctuating laser stripes identified and located by a CNN model, while a prior model quickly removes most of the noise and interference other than the stripe, greatly improving the extraction accuracy and processing speed of the algorithm. In addition, an embedded Pauta criterion processes the center point data stream segment by segment and cyclically eliminates outliers, further ensuring the accuracy of the welding reference point. Experimental results showed that, under the guarantee of the above seam center extraction and correction algorithms, the average tracking error was 0.1 mm, and even when abnormal trajectory points occurred, they did not cause welding torch shaking, system interruption or other accidents.

1. Introduction

The recognition and control of the welding trajectory is a major problem in intelligent welding, and structured light vision is an efficient way to deal with it. With the assistance of structured light vision sensors, teaching-playback robots can automatically plan the welding trajectory through laser multi-point positioning [1,2], pre-welding trajectory fitting [3] and real-time tracking [4], of which real-time tracking is the most important technical means. A highly robust image processing algorithm for feature point extraction is a technical prerequisite for the tracking system, but it remains fallible in long practical welds: tracking reference points are not always extracted correctly under external dynamic interference, and the number of outliers grows with welding time. Therefore, a stable image processing algorithm and a flexible algorithm for removing outlier tracking points must both be incorporated into the tracking regime.
The robot's motion control model based on coordinate points is relatively robust and mature, but the extraction of the seam center is strongly disturbed by the arc, splash, ambient light, etc. Center point extraction is mainly divided into two steps: automatically locating the laser stripe region of interest (ROI) and detecting feature points within the ROI. To shorten the image processing cycle and improve the accuracy of feature point extraction, a target tracking algorithm is used to acquire the ROI. Regions of interest take two main forms: large pixel regions containing joint information, with high adaptability and flexibility, and tiny pixel regions containing single feature points, with high specificity and personalization. For thick plates with large bevels, the strong arc splash across the bevel gap causes violent reflected light at the bevel inflection point, so an algorithm aiming at tracking only the node (bevel inflection point) has a certain feature point extraction error, generally within 10 pixels, although the actual error can be up to 1 mm (the magnification factor in the central area of the camera view is 1 pixel ≈ 0.1 mm). Therefore, an algorithm that tracks the whole bevel was used in this study, further reducing the feature point extraction error by exploiting the complete laser stripe curve. A weighted local cosine similarity algorithm was adopted to detect the seam laser area [5]. Zou et al. [6] used the spatiotemporal context (STC) object tracking algorithm to obtain the processing object for feature point extraction. A recurrent neural network (RNN) was used to learn the temporal context information of convolutional features to accurately detect the seam region [7]. Zou et al. [8] proposed a continuous convolution operator tracker (CCOT) object-tracking algorithm to obtain the object of image processing. Li et al. [9] searched the profile of the welding seam in a small area by using a Kalman filter. To improve the robustness of laser stripe centerline extraction and seam center extraction, the location of welding points must be determined rapidly and accurately in the presence of strong disturbances from ambient illumination, metal dust, splash and the arc. Appropriate target-tracking algorithms can significantly improve the robustness of welding systems with auxiliary light sources. Some target detection algorithms, such as KCF [10], SVM [11] and ERFNet [12], have complicated training processes; therefore, a real-time object detection algorithm based on a CNN [13] was used in this research to rapidly and accurately determine the region of interest containing the bevel laser stripe.
These ROI tracking and prediction algorithms are highly robust, but many of the algorithms that detect feature points within the joint region need to be optimized to meet the requirements of actual welding conditions. Zhang et al. [14,15] detected the laser stripe based on connected region theory, but the largest connected region did not always represent the laser stripe during welding (as outlined in Figure 1), and the approach easily discarded important information. Wu et al. [16] used the Hough linear detection method to obtain the laser stripe centerline equation. Jawad et al. [17] improved the Otsu and line detection algorithms to extract feature points. A precise Hough transform algorithm was designed by Fan et al. [18] to extract the seam center. Furthermore, Xiao et al. [19] extracted the seam center based on inflection point theory after detecting the laser stripe with the Steger algorithm. However, feature extraction algorithms based on the theoretical laser stripe model (Figure 2a) were prone to failure when the stripe contained holes, jumping points, missing feature points and other defects, as shown in Figure 2b–d. For these reasons, an adaptive feature extraction algorithm was investigated in order to extract the seam center from bevel laser stripes with various defect characteristics under actual welding conditions.
Although the seam area prediction models and seam center extraction algorithms proposed by Zhang et al. [20,21,22] could extract the motion reference point to a large extent, their accuracy was not 100%. Because the interference encountered while acquiring accurate feature points is unavoidable, dynamic, irregular and uncertain, there was a high probability that the original laser image lacked valid information, so the authentic seam feature points were not obtained; this could throw the motion program into disorder and explains why the technology sometimes failed suddenly during welding. Factors such as strong arc splash, sensor contamination and ambient light could cause the target tracking model and feature point extraction algorithm to extract the wrong seam center point, so the center point data stream obtained through image processing should not be transmitted directly to the tracking welding controller; a safety or robustness guarantee mechanism for analyzing the authenticity and accuracy of feature points therefore needs to be put forward.
Taking the V-shaped seam as the research object, it was proposed to use the CNN model to automatically locate the ROI of laser stripes. An adaptive feature extraction algorithm based on row and column scanning extracted the seam center point. The prior model further improved the extraction accuracy of feature points at the pixel level. Subsequently, the embedded Pauta criterion [23] processed the center point data stream to ensure the authenticity and accuracy of tracking reference points by eliminating outliers. Finally, the adaptability of these algorithms was discussed. Experiments were conducted in a real-time tracking welding system to verify the robustness and accuracy of the algorithms.

2. Structured Light Vision Welding System

A structured light vision welding system was independently developed, consisting of a six-DOF welding robot (Yaskawa AR1440, Liaoning Dazheng Intelligent Robot Co., Ltd., Liaoning, China), a structured light vision sensor [24], a robot control cabinet (YRC1000, Liaoning Dazheng Intelligent Robot Co., Ltd., Liaoning, China), an arc welder (RD350S, Liaoning Dazheng Intelligent Robot Co., Ltd., Liaoning, China), an industrial PC and a console, as shown in Figure 3. The structured light vision sensor coupled a camera (MV-CA050-20UM, Hangzhou Hikvision Digital Technology Co., Ltd., Hangzhou, China), an infrared laser (HD650AB100-16GD-WLD, Shenzhen Infrared Laser Technology Co., Ltd., Shenzhen, China) and a bandpass filter (FU-650LGP-Y34, Shenzhen Infrared Laser Technology Co., Ltd., Shenzhen, China) attached to the camera lens. The camera and the torch were coaxial, and the laser formed an angle of 25° with the camera axis. The intersection of the laser's optical axis and the camera's axis was positioned 170.33 mm from the camera lens. In addition, the look-ahead distance between the laser line and the welding arc was 50.5 mm during real-time tracking welding.
Figure 4 is a schematic of robot automatic welding, which illustrates the relationship between the pixel coordinate system OPXPYP, the tool coordinate system OHXHYHZH and the robot base coordinate system OBXBYBZB. On the basis of the structured light perspective projection imaging model, the image coordinate system OIXIYI existed on the OPXPYP plane, and the coordinate origin was located at the pixel center; in addition, the camera coordinate system OCXCYCZC was located at a distance f (focal length) directly above the image coordinate system. When P was assumed as a point on this laser line, then its coordinates in the OIXIYI coordinate system, OPXPYP coordinate system and OCXCYCZC coordinate system were (u, v), (c, r) and (xc, yc, zc), respectively.
The transformation between (xc, yc, zc) and (u, v) was established via the trigonometry principle as follows:
$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \frac{1}{z_C} \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} x_C \\ y_C \\ z_C \\ 1 \end{bmatrix} \qquad (1)$$
Under non-ideal conditions, the real coordinates (ud, vd) of the P point in the image coordinate system OIXIYI were geometrically distorted from the theoretical coordinates (u, v) [25], and they were related as follows:
$$\begin{bmatrix} u_d \\ v_d \end{bmatrix} = \frac{2}{1 + \sqrt{1 - 4k\left(u^2 + v^2\right)}} \begin{bmatrix} u \\ v \end{bmatrix} \qquad (2)$$
where k is the distortion coefficient.
For obtaining the coordinates of the P point, the camera coordinate system OCXCYCZC, the transformation between the pixel coordinate system OPXPYP and the image coordinate system OIXIYI needed to be established as follows [26]:
$$\begin{bmatrix} c \\ r \\ 1 \end{bmatrix} = \begin{bmatrix} 1/S_x & 0 & u_0 \\ 0 & 1/S_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} u_d \\ v_d \\ 1 \end{bmatrix} \qquad (3)$$
where $S_x$ and $S_y$ represent the imaging magnification coefficients in the horizontal and vertical directions, respectively, and $(u_0, v_0)$ represents the pixel coordinates of the intersection of the optical axis and the photosensitive chip.
To determine the exact mapping relationship between 2D pixel coordinates (c, r) and 3D camera coordinates (xc, yc, zc), a constraint equation needed to be established by light plane calibration, as follows:
$$A x_C + B y_C + C z_C - D = 0 \qquad (4)$$
The transformation between (xc, yc, zc) and (c, r) can be derived using Equations (1)–(4):
$$\begin{cases} x_C = \dfrac{D\left(S_x c - u_0 S_x\right)}{A\left(S_x c - u_0 S_x\right) + B\left(S_y r - v_0 S_y\right) + C f \delta} \\[2ex] y_C = \dfrac{D\left(S_y r - v_0 S_y\right)}{A\left(S_x c - u_0 S_x\right) + B\left(S_y r - v_0 S_y\right) + C f \delta} \\[2ex] z_C = \dfrac{D f \delta}{A\left(S_x c - u_0 S_x\right) + B\left(S_y r - v_0 S_y\right) + C f \delta} \end{cases} \qquad (5)$$
where $\delta = k u_0^2 S_x^2 - 2 k u_0 S_x^2 c + k v_0^2 S_y^2 - 2 k v_0 S_y^2 r + k S_x^2 c^2 + k S_y^2 r^2 + 1$; $(S_x, S_y, k, u_0, v_0)$ are collectively referred to as the camera internal parameters, which were determined by calibration with the HALCON machine vision library, and $(A, B, C, D)$ are the laser plane parameters, which were obtained by optical plane calibration.
The mapping relationship between the coordinates of the P point in the world coordinate system OBXBYBZB and the coordinates in the camera coordinate system OCXCYCZC is shown in Equation (6).
$$\begin{bmatrix} x_B \\ y_B \\ z_B \\ 1 \end{bmatrix} = \begin{bmatrix} R & T \\ 0 & 1 \end{bmatrix} X_S \begin{bmatrix} x_C \\ y_C \\ z_C \\ 1 \end{bmatrix} \qquad (6)$$
where XS is the hand-eye transformation matrix (i.e., the transformation matrix of the camera coordinate system OCXCYCZC to the tool coordinate system OHXHYHZH) obtained from the hand-eye calibration. The expressions of R and T are as follows:
$$R = R_3 \cdot R_2 \cdot R_1 \qquad (7)$$
$$T = \begin{bmatrix} X & Y & Z \end{bmatrix}^{T} \qquad (8)$$
$$R_1 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos R_X & -\sin R_X \\ 0 & \sin R_X & \cos R_X \end{bmatrix} \qquad (9)$$
$$R_2 = \begin{bmatrix} \cos R_Y & 0 & \sin R_Y \\ 0 & 1 & 0 \\ -\sin R_Y & 0 & \cos R_Y \end{bmatrix} \qquad (10)$$
$$R_3 = \begin{bmatrix} \cos R_Z & -\sin R_Z & 0 \\ \sin R_Z & \cos R_Z & 0 \\ 0 & 0 & 1 \end{bmatrix} \qquad (11)$$
where (X, Y, Z, RX, RY, RZ) are the six parameters of the robot’s motion in the Cartesian coordinate system, which were read out by the trainer.
The pixel coordinates (c, r) of the feature points obtained from image processing and the real-time robot motion parameters (X, Y, Z, RX, RY, RZ) were brought into Equations (5) and (6) to obtain the dynamic 3D coordinates of the seam feature points in the robot base coordinate system.
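For clarity, the coordinate chain of Equations (5)–(11) can also be written compactly in code. The following NumPy sketch is illustrative only: the function and parameter names are not from the paper, the intrinsics (Sx, Sy, k, u0, v0, f), laser plane parameters (A, B, C, D) and hand-eye matrix X_S are assumed to come from the calibrations described above, and the rotation angles are assumed to be in radians. The δ term of Equation (5) is evaluated here in its equivalent compact form k(u_d² + v_d²) + 1.

```python
import numpy as np

def pixel_to_camera(c, r, Sx, Sy, k, u0, v0, f, A, B, C, D):
    """Map a laser-line pixel (c, r) to camera coordinates, following Eq. (5)."""
    ud = Sx * (c - u0)                       # distorted image coordinates, from Eq. (3)
    vd = Sy * (r - v0)
    delta = k * (ud**2 + vd**2) + 1.0        # compact form of the delta term in Eq. (5)
    denom = A * ud + B * vd + C * f * delta
    return np.array([D * ud / denom, D * vd / denom, D * f * delta / denom])

def pose_to_transform(X, Y, Z, RX, RY, RZ):
    """Build the 4x4 base<-tool transform [R T; 0 1] from the robot pose, Eqs. (7)-(11)."""
    cx, sx = np.cos(RX), np.sin(RX)
    cy, sy = np.cos(RY), np.sin(RY)
    cz, sz = np.cos(RZ), np.sin(RZ)
    R1 = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    R2 = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    R3 = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = R3 @ R2 @ R1
    T[:3, 3] = [X, Y, Z]
    return T

def pixel_to_base(c, r, pose, intrinsics, plane, X_S):
    """Full chain of Eqs. (5) and (6): pixel -> camera -> tool -> robot base coordinates.
    intrinsics = (Sx, Sy, k, u0, v0, f), plane = (A, B, C, D), X_S is the 4x4 hand-eye matrix."""
    p_cam = np.append(pixel_to_camera(c, r, *intrinsics, *plane), 1.0)
    p_base = pose_to_transform(*pose) @ X_S @ p_cam
    return p_base[:3]
```

In this sketch, each seam feature point (c, r) extracted from an image, combined with the robot pose (X, Y, Z, RX, RY, RZ) read at the same moment, yields one 3D tracking reference point in the base frame.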
Classical deviation control structures (e.g., PID, fuzzy, fuzzy-PID) could also be used to adjust the welding gun to the correct welding position after the real-time deviation is obtained through image processing, but these traditional deviation control structures are relatively complex and better suited to lower welding speeds. For this reason, this research used the structured light vision sensor to scan the joint, obtained the three-dimensional coordinate data stream of the center points in real time during welding, and made the gun track these actual welding points; this tracking welding control method [27] was not only simple in structure but also had a fast response time.
Figure 5 shows the butt joint workpiece with a V-shaped seam. The 3D point cloud data in OBXBYBZB can be obtained with Equations (5) and (6) by scanning the groove, as shown in Figure 5b.

3. Tracking and Identification of the Seam Area

The CNN model for the typical V-shaped seam was established, as outlined in Figure 6. In order to improve the accuracy of locating the ROI in a strong-noise image, this paper adopted the following flow (a minimal sketch follows the list):
  • Resize the original image from 2592 × 2048 to 224 × 224;
  • Apply multiple convolutions with 3 × 3 kernels, after which the output feature map is flattened into a one-dimensional vector;
  • A softmax layer, fed by the fully connected layers, completes the identification and localization of the target area.
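The following PyTorch sketch illustrates one way such a flow could be realized. The channel widths, layer counts, class count and the bounding-box regression head are assumptions made for illustration; the paper specifies only the 224 × 224 input, stacked 3 × 3 convolutions, flattening and a softmax output produced through fully connected processing.

```python
import torch
import torch.nn as nn

class SeamROINet(nn.Module):
    """Illustrative CNN for identifying and locating the laser-stripe ROI (assumed layout)."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(                                    # stacked 3x3 convolutions
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 224 -> 112
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 112 -> 56
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 56 -> 28
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 28 -> 14
        )
        self.fc = nn.Sequential(nn.Flatten(), nn.Linear(64 * 14 * 14, 256), nn.ReLU())
        self.cls_head = nn.Linear(256, num_classes)  # softmax identification of the seam region
        self.box_head = nn.Linear(256, 4)            # normalized ROI box (x, y, w, h); assumed head

    def forward(self, x):                            # x: (N, 1, 224, 224) resized grayscale images
        h = self.fc(self.features(x))
        return torch.softmax(self.cls_head(h), dim=1), torch.sigmoid(self.box_head(h))
```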
The sample set used for training consisted of 4134 laser images with varying degrees of interference. An ROI object tracking test on 472 images from actual welding showed that the predictive validation accuracy of the model was about 97.0%, as presented in Figure 7. The laser stripe images show that metal soot and a large amount of splash were present in the joint bevel area, especially at the corners and bottom of the joint, where there was a large amount of reflected light. It is worth noting that the light intensity of some splashes was even greater than that of the laser stripe itself, and when they swept across the bottom of the bevel and the edges on both sides, a target tracking algorithm that directly acquires bevel feature points by locating tiny pixel areas was likely to extract target points with large errors.

4. Laser Stripe Feature Points Extraction

The points ed-l, ed-r, Te_l, Te_r, Te_b and Te_c were specified as the feature points of the V-shaped seam, among which Te_c was the seam center point as the reference point for the welding trajectory, as shown in Figure 8.

4.1. Laser Stripe Centerline Extraction

The gray barycentric algorithm [28] was utilized to quickly and efficiently extract the stripe centerline. The “centroid” in the pixel area was considered the pixel center in the area, and the calculation formula of the area centroid is expressed as follows:
$$\begin{cases} \bar{c} = \dfrac{\sum_{(c,r)\in\Omega} c \cdot f(c,r)}{\sum_{(c,r)\in\Omega} f(c,r)} \\[2ex] \bar{r} = \dfrac{\sum_{(c,r)\in\Omega} r \cdot f(c,r)}{\sum_{(c,r)\in\Omega} f(c,r)} \end{cases} \qquad (12)$$
where $f(c,r)$ represents the gray value of the pixel with coordinates $(c, r)$, $\Omega$ denotes the set of pixels in the target region, and $(\bar{c}, \bar{r})$ is the gray barycentric coordinate of the region.
The skeleton of the laser stripes obtained by the algorithm scanning along the c-direction is equal to the set of the grayscale center of gravity points in each column region of the image. For the laser stripe image covered by a strong arc splash, the interference should be removed by the image preprocessing algorithm before the skeleton of the stripe was extracted by the gray barycentric method, as illustrated in Figure 9. It is seen in Figure 9d that the extracted centerline in the search area could accurately map the spatial coordinate information of the seam. Furthermore, with the column of the pixel as an independent variable and the row of the pixel as a dependent variable, there was a one-to-one mapping functional relationship between the independent variable and the dependent variable for the skeleton.
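As an illustration of this column-scan barycenter extraction, the sketch below computes the stripe skeleton from a preprocessed grayscale ROI. The ROI layout, threshold and function names are assumptions rather than the paper's implementation; columns whose weights sum to zero are skipped, which is where the holes discussed in Section 4.2 originate.

```python
import numpy as np

def stripe_centerline(img, roi, intensity_thresh=30):
    """Column-scan gray-barycenter skeleton (Eq. (12)) of a preprocessed laser stripe.
    img: 2D grayscale array; roi: (c0, c1, r0, r1) pixel bounds of the search area."""
    c0, c1, r0, r1 = roi
    cols, rows = [], []
    r_index = np.arange(r0, r1, dtype=float)
    for c in range(c0, c1):
        column = img[r0:r1, c].astype(float)
        column[column < intensity_thresh] = 0.0   # suppress residual background noise
        weight = column.sum()
        if weight == 0.0:                         # hole in the stripe: no centroid for this column
            continue
        cols.append(c)
        rows.append(float((r_index * column).sum() / weight))   # sub-pixel barycenter row
    return np.array(cols), np.array(rows)         # one-to-one mapping: column -> skeleton row
```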

4.2. Seam Center Point Recognition

When the laser skeleton possessed discontinuous characteristics such as jumping points, holes and missing points (prone to occur in weak arc splash welding, as shown in Figure 2b,c) or fluctuating characteristics (prone to occur in strong arc splash welding, as shown in Figure 2d), traditional feature point extraction methods, such as inflection point method, slope analysis method and Hough straight line detection method, were likely to fail. In this study, an adaptive feature extraction algorithm based on row scanning and column scanning was adopted to solve this problem. The design idea of the adaptive feature extraction algorithm is shown in Figure 10.
The steps involved in the algorithm are as follows:
Step 1: Shield the holes on the skeleton (as illustrated in Figure 11). After that, the horizontal coordinate set p_col and vertical coordinate set p_row to be processed are obtained.
Step 2: Remove the jumping points. If the point (p_col[a], p_row[a]) satisfies Equation (13), it is treated as a jumping point and removed; otherwise, it is used as the comparison point for the next point to be examined. Through Equation (13), the new abscissa set p_c and ordinate set p_r are obtained.
$$\Delta p = \left| p\_row[a] - p\_row[b] \right|, \qquad \Delta p > \left| \left( p\_col[a] - p\_col[b] \right) \cdot k_{\alpha/2} \cdot \vartheta \right| = p_a \qquad (13)$$
where $k_{\alpha/2}$ and $\vartheta$ are the slope of the groove surface and a correction factor, respectively; (p_col[b], p_row[b]) is the nearest normal point preceding (p_col[a], p_row[a]); a and b are integers, and a > b.
Step 3: Perform least-squares fitting on the first n1 points and the last n4 points in Ω, respectively, to obtain the straight-line equations v = f1(u) and v = f4(u).
Step 4: Using row scanning, isolate Δv rows of data at the bottom of the skeleton to eliminate the influence of reflected light in the valley.
Step 5: Scanning from the point (p_col[n1], p_row[n1]) to the valley bottom and from the valley bottom to the point (p_col[n − n4], p_row[n − n4]), perform least-squares fitting on the points satisfying Equation (14) to obtain the equations v = f2(u) and v = f3(u).
$$\Delta d = \frac{\left| a \cdot p\_c[s] - p\_r[s] + b \right|}{\sqrt{a^2 + 1}} > d_a \qquad (14)$$
where $d_a$ is set to 10; when calculating f2, a and b represent the slope and intercept of f1, respectively, and when calculating f3, a and b represent the slope and intercept of f4, respectively.
Step 6: Calculate the coordinates of Te_c according to Equation (15).
$$\begin{cases} u_{Te\_c} = \dfrac{u_{Te\_l} + u_{Te\_r}}{4} + \dfrac{u_{Te\_b}}{2} \\[2ex] v_{Te\_c} = \dfrac{v_{Te\_l} + v_{Te\_r}}{2} \end{cases} \qquad (15)$$
where Te_l, Te_r and Te_b are obtained by intersecting the fitted lines above.
The laser image with a strong arc splash was taken as an example. It can be found from Figure 12 that the algorithm above has a high anti-interference ability to arc splash, and the extracted feature point Te_c could truly represent the actual location of the seam center. At the same time, even in the case of holes, bulges, concaves and jumping points in the laser centerline or in the case of fluctuations in the laser centerline, the algorithm also has a good extraction effect.
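A minimal sketch of Steps 2, 5 and 6 is given below. The function names, the default threshold and the use of NumPy's polyfit for the least-squares fits are assumptions for illustration; the groove slope k_slope (corresponding to k_{α/2}) and the correction factor ϑ are workpiece-dependent inputs.

```python
import numpy as np

def remove_jumping_points(p_col, p_row, k_slope, vartheta):
    """Step 2: drop points whose row jump violates the groove-slope bound of Eq. (13)."""
    keep_c, keep_r = [p_col[0]], [p_row[0]]
    for a in range(1, len(p_col)):
        dp = abs(p_row[a] - keep_r[-1])                        # jump relative to last normal point
        pa = abs((p_col[a] - keep_c[-1]) * k_slope * vartheta)
        if dp > pa:
            continue                                           # jumping point: discard
        keep_c.append(p_col[a])
        keep_r.append(p_row[a])
    return np.array(keep_c), np.array(keep_r)

def fit_groove_wall(p_c, p_r, a, b, d_a=10.0):
    """Step 5: fit a groove side wall from points farther than d_a from the surface line
    v = a*u + b, following the distance criterion of Eq. (14)."""
    dist = np.abs(a * p_c - p_r + b) / np.sqrt(a**2 + 1.0)
    mask = dist > d_a
    slope, intercept = np.polyfit(p_c[mask], p_r[mask], 1)     # v = f2(u) or v = f3(u)
    return slope, intercept

def seam_center(te_l, te_r, te_b):
    """Step 6: seam center Te_c from the intersection points, per Eq. (15)."""
    u = (te_l[0] + te_r[0]) / 4.0 + te_b[0] / 2.0
    v = (te_l[1] + te_r[1]) / 2.0
    return u, v
```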

4.3. Improving Te_c Accuracy

A strong arc and strong splash can make the laser stripe centerline deviate from its ideal position, but the slope and width of the stripe are almost unchanged between adjacent frames. Hence, the noise in the next frame can be effectively filtered according to this context information.
The allowable slope and width of each stripe were defined in a prior algorithm, as shown in Equation (16); only the points satisfying these ranges were retained. The process of the prior model is illustrated in Figure 13.
$$Stripe\ i: \begin{cases} width = \left[\, y_{min},\ y_{max} \,\right] \\ \theta = \left[\, \theta_{min},\ \theta_{max} \,\right] \end{cases} \qquad (16)$$
where $i$ is the sequence number of the stripe image, $width$ represents the width range of the stripe, and $\theta$ represents the angle range of the stripe.
According to the stripe feature points of the previous frame, the current frame image was divided into four segments. Based on the centerline angle of the connected region in each segment and the stripe centerline equation of the previous frame, the main stripe information was separated from the background noise by the angle criterion and width criterion of the prior algorithm. The effect of improving the accuracy of Te_c (Figure 14) showed that the prior algorithm overcame the strong arc and splash interference, ensured that the stripe centerline was more consistent with the theoretical attitude and position (stripe centerline defect characteristics were removed) and further reduced the error of center point extraction at the pixel level.
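A sketch of this per-segment filtering is shown below, assuming a binary stripe mask for one segment and the width and angle ranges carried over from the previous frame. The use of scipy.ndimage connected-component labeling and the specific measures of blob width and angle are illustrative choices, not the paper's implementation.

```python
import numpy as np
from scipy import ndimage

def prior_filter(segment_mask, width_range, theta_range):
    """Keep only connected regions whose width and centerline angle fall inside the
    ranges of Eq. (16) inherited from the previous frame's stripe."""
    w_min, w_max = width_range          # allowed stripe width in pixels
    t_min, t_max = theta_range          # allowed centerline angle in degrees
    labels, n = ndimage.label(segment_mask)
    keep = np.zeros_like(segment_mask, dtype=bool)
    for lab in range(1, n + 1):
        rr, cc = np.nonzero(labels == lab)
        width = rr.max() - rr.min() + 1                          # vertical extent of the blob
        if len(np.unique(cc)) < 2 or not (w_min <= width <= w_max):
            continue
        theta = np.degrees(np.arctan(np.polyfit(cc, rr, 1)[0]))  # blob centerline angle
        if t_min <= theta <= t_max:
            keep |= (labels == lab)                              # region consistent with the prior
    return keep
```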
In order to improve the stability and accuracy of the feature point extraction algorithm, in this study, the CNN model was used to locate and track the bevel region of interest in real-time, followed by the prior model to remove the image interference information, and finally, the adaptive feature extraction algorithm was able to extract the seam center quickly. Since the above algorithms were discussed under conventional welding conditions, it was necessary to perform tests under extreme environments to explore the adaptability of the algorithms to the welding environment.
An extreme welding environment was taken to satisfy two conditions: (1) a camera exposure value higher than normal (15,000 EV) and (2) a thicker laser line (>1 mm). Under such welding conditions, tests a and b were performed: in test a, the welding current was 100 A, the camera exposure value was 20,000 EV and the laser line was 1.5 mm; in test b, the welding current was 160 A, the camera exposure value was 40,000 EV and the laser line was 2.0 mm, as shown in Figure 15. As the welding current, exposure value and laser line width increased, the noise (strong arc splash) in the laser stripe image interfered more and more with the bevel laser lines; for example, some laser lines could no longer effectively characterize the bevel morphology (Figure 15(b2)). However, the pose and position of the stripe centerline extracted with the CNN model and the prior model were more realistic representations of the 3D shape of the joint bevel (Figure 15(a4,b4)), while the adaptive feature extraction algorithm was able to extract the seam center accurately from the curve characteristics of the laser stripe centerline within the ROI. It can be concluded that the seam center extraction scheme proposed above is highly practical and adaptable to various complex welding environments.

5. Outliers Filtering

During tracking welding, under the action of the real-time correction function, the coordinates of the seam center point in the image coordinate system OIXIYI should theoretically be constant. However, because the response period of the correction motion lags behind the extraction period of the feature points, the image coordinate of the seam center point fluctuates around the actual center point with varying amplitude (i.e., the tracking deviation). In the actual welding process, the coordinates of the seam center point extracted from some frames are likely to deviate abnormally from the fluctuation center, which is caused by strong arc splash interference and ROI extraction error. Figure 16 shows the real-time tracking welding results based on the above algorithms. It can be seen that outliers inevitably appeared during welding, and the probability of an outlier occurring at the moment of arc ignition was much higher than in other welding periods. The pixel coordinates of the outliers deviated from the theoretical fluctuation center by up to 10 pixels.
Using a singular seam center point as the tracking reference point and transmitting it to the motion control system would cause the welding actuator to vibrate violently and could even lead to safety accidents. In fact, the working environment of the optical sensor is extremely demanding, and it is difficult to guarantee with any single algorithm that every center point extracted by the sensor is absolutely correct. However, the high-frequency images collected by the sensor are acquired ahead of the welding point, and the time difference created by this look-ahead makes it possible to remove outliers. Therefore, an embedded Pauta criterion is proposed in this study. The Pauta criterion is written as follows:
$$\left| v_i \right| > 2\sigma = 2\sqrt{\frac{\sum_{i=1}^{n} \left( x_i - \bar{x} \right)^2}{n - 1}} \qquad (17)$$
where $v_i$ is the residual and $\sigma$ is the standard deviation.
Figure 17 shows the flowchart of the embedded Pauta criterion used for filtering outliers. A detailed description of the process is as follows:
(1) The data stream of Te_c is formed by the ROI determination algorithm and the tracking reference point acquisition algorithm.
(2) Twenty coordinate values are read sequentially from the data stream as a sample.
(3) The sample is repeatedly processed with the Pauta criterion (Equation (17)) until no outliers remain in the sample.
(4) The next sample is read, and (2) and (3) are repeated to form the data stream of accurate tracking reference points.
Of 870 Te_c points from the image processing system, 41 outliers were removed (Figure 18). The Pauta criterion embedded into the real-time tracking welding system by piecewise processing eliminated the dangerous trajectory points caused by unavoidable dynamic interference and improved the robustness of tracking welding.
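For illustration, the sketch below applies this segmented filtering to one coordinate channel of the Te_c stream; the window size of 20 follows the text, while the function names and the handling of very short samples are assumptions.

```python
import numpy as np

def pauta_filter(sample):
    """Repeatedly apply the 2-sigma Pauta test of Eq. (17) to one sample until no outlier remains."""
    data = np.asarray(sample, dtype=float)
    while len(data) > 2:
        mean = data.mean()
        sigma = np.sqrt(((data - mean) ** 2).sum() / (len(data) - 1))
        outliers = np.abs(data - mean) > 2.0 * sigma    # residuals exceeding 2*sigma
        if not outliers.any():
            break
        data = data[~outliers]                          # drop outliers and re-test the sample
    return data

def embedded_pauta(stream, window=20):
    """Steps (2)-(4): segment the Te_c stream into 20-point samples and filter each one
    before it is handed to the tracking controller."""
    cleaned = []
    for start in range(0, len(stream), window):
        cleaned.extend(pauta_filter(stream[start:start + window]))
    return np.array(cleaned)
```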

6. Experiment and Analysis

To verify the accuracy of the CNN model, the prior model, the adaptive feature extraction algorithm and the embedded Pauta criterion proposed in this paper, a series of real-time tracking welding experiments was conducted in the actual operating environment using the welding system of Figure 3. A butt-jointed specimen with a groove angle of 60° and a plate thickness of 5 mm was used as the experimental object (Figure 5). Three groups of real-time tracking welding experiments were conducted at welding currents of 80, 120 and 160 A. For each group, the welding speed was 15 cm/min, the gas flow rate was 15 L/min and the wire feed speed and welding voltage were automatically matched by the arc welder; the results are shown in Figure 19, Figure 20, Figure 21 and Figure 22.
The tracking welding experiment was completed under the coordinated control of the YASKAWA robot and the structured light vision sensor. From the recognition trajectory obtained by the sensor through image processing and coordinate transformation (Figure 19), it can be seen that there were 5, 7 and 15 outliers in the recognition trajectory at welding currents of 80, 120 and 160 A, respectively. The number of outliers significantly deviating from the central curve of the recognition trajectory increased with the welding current. If these outliers were transmitted to the tracking control system, they would certainly reduce welding accuracy and could even cause safety accidents.
In this experiment, the processing time of a single image was about 0.08 s (the processing period of the CNN model was about 0.03 s, that of the adaptive feature extraction algorithm about 0.04 s, and that of the prior algorithm about 0.01 s), and the processing period of the embedded Pauta criterion for 20 coordinate points was about 0.005 s. To ease the computational load on the tracking software, the camera frame rate could not be too high and was set to 10 fps. Since the acquisition of the real coordinate data stream preceded the execution of the robot trajectory, the robot communication rate was set slightly higher than the coordinate acquisition rate, at about 12 Hz.
Figure 20 shows the relationship between the welding torch tracking trajectory and the seam centerline in the tracking system with the embedded Pauta criterion. The tracking trajectory was obtained by recording the 3D coordinates of the robot torch endpoint at high frequency during welding, while the seam centerline was obtained by least-squares fitting of the seam center points acquired by the sensor scanning the seam before welding. When the welding current was 80, 120 and 160 A, the deviation of the endpoint of the taught trajectory from the seam was 42, 28 and 75 mm, respectively. The tracked trajectory obtained with the embedded Pauta criterion had a smooth curve characteristic with no remaining singularities. The three experiments showed that the tracked trajectory was basically consistent with the seam centerline, and the algorithms mentioned above could rectify the welding torch to the actual seam center in real time.
The real-time measurement error between the tracked trajectory and the seam centerline is shown in Figure 21. The measured deviation oscillated near zero with varying amplitude throughout the welding process. The graph shows that, under the same algorithmic conditions, the welding current was the key parameter influencing singularity generation (Figure 19), but its effect on the tracking error was relatively minor. This is because, under the effect of the embedded Pauta criterion, the welding torch ran smoothly and the deviation was maintained in a stable range without singular values. The experimental results show that the maximum errors of the three tracking welding experiments were −0.25, −0.23 and −0.24 mm, respectively, and the average errors were 0.1, 0.11 and 0.09 mm, respectively, which proves the overall high robustness of the proposed algorithms. The result of real-time tracking welding is shown in Figure 22. The welding position was accurate and the seam metal forming was good, indicating that the proposed seam tracking system had outstanding performance.

7. Conclusions

The laser stripe, which inevitably presents different types of defect features due to intense arc and splash interference, reduced the robustness of weld tracking. Therefore, the adaptive feature point extraction algorithm and the correction algorithm were proposed in this study. Specifically, the following conclusions were drawn:
(1)
Based on the principle of structured light vision, a real-time seam-tracking welding system was independently designed, which provided an important reference model for the industrial application of tracking welding technology.
(2)
With a recognition rate of 97.0%, the CNN model accurately obtained the target area of image processing in real time under the different intensities of arc splash.
(3)
The adaptive feature extraction algorithm based on row scanning and column scanning had strong anti-interference ability and adaptability to defects (such as holes, bulges, concaves, jumping points, etc.), welding arc and splashes in laser stripe, and accurately extracted the seam center from different types of laser stripe. The algorithm had good applicability for various types of laser stripes in real complex environments and provided a reliable solution for extracting feature points.
(4)
The prior algorithm accurately and quickly located the contour of the laser stripe centerline, effectively removed other image interference and noise and raised the accuracy of the adaptive feature extraction algorithm at the pixel level.
(5)
With the outlier elimination provided by the embedded Pauta criterion, the structured light vision sensor, which captured high-frequency laser stripe images ahead of the welding torch, obtained a smooth and gentle tracking trajectory, improved the stability of the system and avoided welding torch shaking and safety accidents. The embedded Pauta criterion provides a new idea for the safety mechanism of tracking welding, guaranteeing stable system operation at the algorithmic level.
(6)
The accuracy verification experiments demonstrated that the tracking error was mainly controlled within ±0.2 mm, with an average error of 0.1 mm. The results confirm that the adaptive feature extraction algorithm and the outlier removal algorithm are sufficient to ensure accurate and reliable real-time tracking welding by the robot and to provide a stable welding regime. Our future research will focus on optimizing the matching between the robot control cycle and the image processing cycle so that a highly robust welding application can be achieved.

Author Contributions

Conceptualization, H.W.; Data curation, H.W.; Formal analysis, Y.L., B.Z. and S.H.; Investigation, Y.Z., Y.L., B.Z. and S.H.; Resources, X.Z. and H.W.; Writing—original draft, Y.Z.; Writing—review & editing, X.Z. and Y.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work was financially supported by the National Natural Science Foundation of China under Grant No. 51875246, the Key Scientific and Technological Research and Development Projects of the Jilin Provincial Science and Technology Department through Grant No. 20210201056GX, and the "13th Five Year Plan" Science and Technology Research of the Education Department of Jilin Province through Grant No. JJKH20200954KJ.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

  1. Liu, F.; Wang, Z.; Ji, Y. Precise initial weld position identification of a fillet weld seam using laser vision technology. Int. J. Adv. Manuf. Technol. 2018, 99, 2059–2068. [Google Scholar] [CrossRef]
  2. Tan, Z.; Zhao, B.; Ji, Y.; Xu, X.; Kong, Z.; Liu, T.; Luo, M. A welding seam positioning method based on polarization 3D reconstruction and linear structured light imaging. Opt. Laser Technol. 2022, 151, 108046. [Google Scholar] [CrossRef]
  3. He, Y.; Xu, Y.; Chen, Y.; Chen, H.; Chen, S. Weld seam profile detection and feature point extraction for multi-pass route planning based on visual attention model. Robot. Comput. Manuf. 2016, 37, 251–261. [Google Scholar] [CrossRef]
  4. Fan, J.; Deng, S.; Jing, F.; Zhou, C.; Yang, L.; Long, T.; Tan, M. An Initial Point Alignment and Seam-Tracking System for Narrow Weld. IEEE Trans. Ind. Inform. 2019, 16, 877–886. [Google Scholar] [CrossRef]
  5. Zou, Y.; Li, J.; Chen, X. Seam tracking investigation via striped line laser sensor. Ind. Robot. Int. J. Robot. Res. Appl. 2017, 44, 609–617. [Google Scholar] [CrossRef]
  6. Zou, Y.; Chen, X.; Gong, G.; Li, J. A seam tracking system based on a laser vision sensor. Measurement 2018, 127, 489–500. [Google Scholar] [CrossRef]
  7. Zou, Y.; Zhu, M.; Chen, X. A Robust Detector for Automated Welding Seam Tracking System. J. Dyn. Syst. Meas. Control 2021, 143. [Google Scholar] [CrossRef]
  8. Zou, Y.; Chen, T. Laser vision seam tracking system based on image processing and continuous convolution operator tracker. Opt. Lasers Eng. 2018, 105, 141–149. [Google Scholar] [CrossRef]
  9. Li, X.; Li, X.; Ge, S.S.; Khyam, M.O.; Luo, C. Automatic Welding Seam Tracking and Identification. IEEE Trans. Ind. Electron. 2017, 64, 7261–7271. [Google Scholar] [CrossRef]
  10. Zou, Y.; Wang, Y.; Zhou, W.; Chen, X. Real-time seam tracking control system based on line laser visions. Opt. Laser Technol. 2018, 103, 182–192. [Google Scholar] [CrossRef]
  11. Zou, Y.; Chen, J.; Wei, X. Research on a real-time pose estimation method for a seam tracking system. Opt. Lasers Eng. 2019, 127, 105947. [Google Scholar] [CrossRef]
  12. Zhao, Z.; Luo, J.; Wang, Y.; Bai, L.; Han, J. Additive seam tracking technology based on laser vision. Int. J. Adv. Manuf. Technol. 2021, 116, 197–211. [Google Scholar] [CrossRef]
  13. Du, R.; Xu, Y.; Hou, Z.; Shu, J.; Chen, S. Strong noise image processing for vision-based seam tracking in robotic gas metal arc welding. Int. J. Adv. Manuf. Technol. 2018, 101, 2135–2149. [Google Scholar] [CrossRef]
  14. Zhang, G.; Yun, T.-J.; Oh, W.-B.; Lee, B.-R.; Kim, I.-S. A study on seam tracking in robotic GMA welding process. Mater. Today Proc. 2020, 22, 1771–1777. [Google Scholar] [CrossRef]
  15. Xue, B.; Chang, B.; Peng, G.; Gao, Y.; Tian, Z.; Du, D.; Wang, G. A Vision Based Detection Method for Narrow Butt Joints and a Robotic Seam Tracking System. Sensors 2019, 19, 1144. [Google Scholar] [CrossRef]
  16. Wu, Q.-Q.; Lee, J.-P.; Park, M.-H.; Jin, B.-J.; Kim, D.-H.; Park, C.-K.; Kim, I.-S. A study on the modified Hough algorithm for image processing in weld seam tracking. J. Mech. Sci. Technol. 2015, 29, 4859–4865. [Google Scholar] [CrossRef]
  17. Muhammad, J.; Altun, H.; Abo-Serie, E. A robust butt welding seam finding technique for intelligent robotic welding system using active laser vision. Int. J. Adv. Manuf. Technol. 2016, 94, 13–29. [Google Scholar] [CrossRef]
  18. Fan, J.; Jing, F.; Yang, L.; Long, T.; Tan, M. A precise seam tracking method for narrow butt seams based on structured light vision sensor. Opt. Laser Technol. 2018, 109, 616–626. [Google Scholar] [CrossRef]
  19. Xiao, R.; Xu, Y.; Hou, Z.; Chen, C.; Chen, S. An adaptive feature extraction algorithm for multiple typical seam tracking based on vision sensor in robotic arc welding. Sens. Actuators A: Phys. 2019, 297. [Google Scholar] [CrossRef]
  20. Zhang, B.; Shi, Y.; Gu, S. Narrow-seam identification and deviation detection in keyhole deep-penetration TIG welding. Int. J. Adv. Manuf. Technol. 2018, 101, 2051–2064. [Google Scholar] [CrossRef]
  21. Banafian, N.; Fesharakifard, R.; Menhaj, M.B. Precise seam tracking in robotic welding by an improved image processing approach. Int. J. Adv. Manuf. Technol. 2021, 114, 251–270. [Google Scholar] [CrossRef]
  22. Zou, Y.; Chen, T.; Chen, X.; Li, J. Robotic seam tracking system combining convolution filter and deep reinforcement learning. Mech. Syst. Signal Process. 2021, 165, 108372. [Google Scholar] [CrossRef]
  23. Shen, C.; Bao, X.; Tan, J.; Liu, S.; Liu, Z. Two noise-robust axial scanning multi-image phase retrieval algorithms based on Pauta criterion and smoothness constraint. Opt. Express 2017, 25, 16235–16249. [Google Scholar] [CrossRef] [PubMed]
  24. Wu, D.; Chen, T.; Li, A. A High Precision Approach to Calibrate a Structured Light Vision Sensor in a Robot-Based Three-Dimensional Measurement System. Sensors 2016, 16, 1388. [Google Scholar] [CrossRef]
  25. Lenz, R.; Fritsch, D. Accuracy of videometry with CCD sensors. ISPRS J. Photogramm. Remote Sens. 1990, 45, 90–110. [Google Scholar] [CrossRef]
  26. Zhu, C.; Zhu, Z.; Ke, Z.; Zhang, T. Internal Parameters Calibration of Vision Sensor and Application of High Precision Integrated Detection in Intelligent Welding Based on Plane Fitting. Sensors 2022, 22, 2117. [Google Scholar] [CrossRef]
  27. Kos, M.; Arko, E.; Kosler, H.; Jezeršek, M. Remote-laser welding system with in-line adaptive 3D seam tracking and power control. Procedia CIRP 2019, 81, 1189–1194. [Google Scholar] [CrossRef]
  28. Zhao, C.; Yang, J.; Zhou, F.; Sun, J.; Li, X.; Xie, W. A Robust Laser Stripe Extraction Method for Structured-Light Vision Sensing. Sensors 2020, 20, 4544. [Google Scholar] [CrossRef]
Figure 1. Connected region detection results of V-shaped joint. (a) Connected area extraction results for the narrow laser line. (b) Connected area extraction results for the wide laser line.
Figure 2. Four typical types of laser stripes under practical operating conditions. (a) The theoretical laser stripe. (b) The laser stripe with jumping points. (c) The laser stripe with missing feature points. (d) The fluctuating laser stripe.
Figure 3. Diagram of a structured light vision welding system.
Figure 4. Schematic of robot automatic welding.
Figure 5. Butt joint workpiece with a V-shaped seam. (a) The butt joint workpiece. (b) The 3D point cloud data of V-shaped seam.
Figure 6. Schematic diagram of the CNN model.
Figure 7. Target area identifying results.
Figure 8. The feature points of the V-shaped seam.
Figure 9. The extraction process of stripe centerline. (a) Initial image. (b) Otsu threshold segmentation after Median filtering. (c) Morphological processing. (d) Extracting stripe centerline.
Figure 10. The design idea of adaptive feature extraction algorithm.
Figure 11. The process of shielding holes.
Figure 12. The extraction results of seam feature points. (a1–a4) Positions of feature points on the centerline. (b1–b4) Mapping positions of feature points and centerlines in (a1–a4) on initial images.
Figure 13. The prior model.
Figure 14. The effect of improving the accuracy of the Te_c.
Figure 15. Test results under extreme environments. (a1,b1) Original image. (a2,b2) ROI. (a3,b3) The prior model processing effect. (a4,b4) The feature point extraction effect.
Figure 16. Extraction results of Te_c during tracking welding.
Figure 17. The flowchart of the embedded Pauta criterion.
Figure 18. Piecewise processing results of embedded Pauta criterion.
Figure 19. The recognition trajectory before Pauta criterion processing.
Figure 20. The welding trajectory under real-time tracking.
Figure 21. The real-time tracking accuracy.
Figure 22. Seam tracking welding results.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
