Article

Improved RANSAC Point Cloud Spherical Target Detection and Parameter Estimation Method Based on Principal Curvature Constraint

Hubei Key Laboratory of Modern Manufacturing Quantity Engineering, School of Mechanical Engineering, Hubei University of Technology, Wuhan 430068, China
* Author to whom correspondence should be addressed.
Sensors 2022, 22(15), 5850; https://doi.org/10.3390/s22155850
Submission received: 8 July 2022 / Revised: 1 August 2022 / Accepted: 3 August 2022 / Published: 5 August 2022
(This article belongs to the Special Issue Recent Advances in Information Geometric Signal Processing)

Abstract
Spherical targets are widely used to unify the coordinate systems of large-scale combined measurements. Through their center coordinates, point cloud data scanned from different locations can be transformed into a unified coordinate reference system. However, point cloud sphere detection suffers from detection errors and long detection times. For this reason, a novel method of spherical target detection and parameter estimation based on an improved random sample consensus (RANSAC) algorithm is proposed. Firstly, the principal curvature of the point cloud data is calculated. Combined with the k-d nearest neighbor search algorithm, a principal curvature constraint is imposed on the randomly sampled points to improve the quality of the samples selected by RANSAC and increase the detection speed. Secondly, the RANSAC method is combined with the total least squares method: the total least squares method estimates the sphere parameters from the inlier point set obtained by the RANSAC algorithm. The experimental results demonstrate that the method outperforms the conventional RANSAC algorithm in both accuracy and detection speed when estimating sphere parameters.

1. Introduction

During the maintenance of aircraft landing gear, it is necessary to detect assembly errors such as the parallelism of the front and rear axles of the frame and the coaxiality between the shock absorber strut and its sleeve. Because of the large measurement space and the complex structure of the measured parts, they need to be measured by a combined measurement method. Combined measurement selects a large-scale, high-precision measurement method for global measurement and control, together with a precise and efficient morphology measurement method as the terminal measurement for collecting high-density point clouds [1,2]. Its key principle is to transform the terminal scan data from different locations into a unified global coordinate reference system with the help of auxiliary targets [3]. Three-dimensional structured light scanning technology is often used as the terminal measurement device in combined measurements due to its good collimation, non-contact mode, high accuracy, and fast measurement [4,5].
Because spheres are used as the detection targets, any error in the determined sphere centers propagates into the coordinate system transformation. In order to improve the efficiency and accuracy of coordinate system unification during measurement, many studies have been devoted to improving sphere detection accuracy and shortening the calculation time. Three-dimensional point cloud sphere detection methods include the 3D Hough transform [6], registration [7], random sample consensus (RANSAC) [8], region growing [9], and others. For example, Schnabel et al. [10] proposed a RANSAC method for 3D sphere detection. This method can effectively detect spherical regions in a point cloud, has high detection accuracy, and can suppress the effect of noise. However, traditional RANSAC has high computational complexity and relatively high time consumption [11]. Other researchers [12,13] have put forward improved RANSAC methods. These improved RANSAC algorithms can effectively segment targets, but they do not delve into precise target fitting. Camurri et al. [14] applied the 3D Hough transform to sphere identification and detection in point cloud data. It can calculate sphere parameter values from point clouds containing noise and suppress noise points. However, in practical applications, the 3D Hough transform is complicated in three-dimensional space, with a large computational load and high time consumption. Abuzaina et al. [15] proposed a method based on the 3D Hough transform to detect spheres in 3D point cloud data generated by the Kinect sensor, and it has higher computational efficiency than traditional algorithms. Other researchers [16,17] also proposed methods to improve the sphere detection efficiency and robustness of 3D Hough detection. These methods based on the 3D Hough transform can effectively detect spheres in a point cloud and suppress noise. However, their detection accuracy depends on the resolution of the discretized space, so the calculation cost increases sharply as the accuracy is improved. In many improved methods, the radius of the sphere in the point cloud must be known in advance, and the spherical point cloud must cover at least a hemisphere for the method to be feasible. Vo et al. [18] applied the region growing method to sphere detection, selecting seed points to expand according to surface membership criteria and dividing point clouds into different surfaces through iteration. Nurunnabi et al. [19] improved the region growing method; their method restricts the selection region of seed points to the area near the artificially filtered edge and selects the points with the least curvature in the region as seed points. The method can detect the spherical model in the point cloud quickly, but the detection accuracy depends mainly on the pre-selected seed points. When the scene contains many noise points, it is challenging to select seed points, and the detection accuracy is low. Wang et al. [20] proposed an automatic method of laser point cloud registration based on spherical target detection, which can quickly detect spherical targets in the point cloud. Huang et al. [21] proposed a registration method based on an auxiliary spherical target combined with the ICP algorithm, in which the spherical target provides spherical constraints for registration.
These registration methods can quickly detect spherical targets after scene segmentation, but they lack a noise suppression mechanism when the segmented scene still contains noise.
When 3D structured light scanning is used as the terminal of a large-scale combined measurement, the scanned point cloud data contain a spherical point cloud that is slightly smaller than a hemisphere, environmental points, outliers, and the regular point clouds of the measured object; that is, non-spherical points account for a relatively high proportion of the point cloud. The RANSAC algorithm can suppress the influence of point cloud noise on spherical feature recognition, and it has high robustness and efficiency. Therefore, it is suitable for coordinate unification scenarios in large-scale measurement.
However, because traditional RANSAC selects sample points randomly, the model established from randomly selected sample points in the presence of many noise points is often an erroneous model, and such a model can only be rejected after the inlier set has been computed in every iteration, which wastes a lot of time. To address this problem, Shuyang Shang et al. [22] proposed an improved RANSAC point cloud sphere detection method based on marking irrelevant points. Triangulation and statistics on the point cloud spacing are used to mark points irrelevant to spherical shapes so that they cannot be selected as sample points, thus improving the iteration efficiency. However, this method is suitable for scenes in which the noise consists only of scattered clutter. In large-scale combined measurement point cloud data, in addition to discrete outliers, there are also measured points forming regular point clouds, so this method is unstable for such detection. There is another problem with traditional RANSAC: the algorithm uses the least squares method to correct the parameters of the spherical model, which treats the coefficient matrix as error-free. However, there are errors in the point cloud data, and the coefficient matrix is also affected by these errors [23]. Therefore, a good fitting result cannot be obtained when the threshold value is large.
This paper proposes an improved RANSAC method based on a principal curvature constraint (PC-RANSAC) for point cloud spherical target detection and parameter estimation to solve the above problems. It improves the traditional RANSAC algorithm in two ways. First, a constraint based on the principal curvature of the point cloud is added: four random neighboring points are selected with the k-d nearest neighbor search algorithm and then checked against their principal curvatures to improve sample quality. Second, whereas the least squares fit in the traditional RANSAC algorithm only considers the error in the Z direction and ignores the errors in the X and Y directions, the improved method uses the total least squares method to optimize the estimation of the spherical target parameters.

2. Systems and Methods

2.1. System Design

As shown in Figure 1, the hardware of the combined measurement system includes a flexible joint arm, a 3D scanner, a manipulator, and two sets of spherical targets. Each spherical target consists of multiple (at least three) ceramic standard spheres fixed in relative position. The flexible joint arm provides the global measurement coordinate system of the combined measurement system. The manipulator drives the 3D scanner, which serves as the measurement terminal, to obtain the point cloud of the measured object's surface. The two groups of spherical targets are placed in measurement areas 1 and 2 of the measured object, respectively. The position of each spherical target in the global measurement coordinate system can be measured by the flexible joint arm. During the measurement, the 3D scanner at the end of the manipulator is controlled to scan the two measurement areas, and the measured element point clouds and target point clouds in the two regions are obtained. In this way, the coordinates of each target are known both in the global measurement coordinate system and in the coordinate system of the 3D scanner measurement terminal. The point cloud data of the measured elements in the region where a spherical target is located can then be converted to the global measurement coordinate system, and coordinate unification of the measured elements is achieved. The key factors affecting the measurement accuracy of the system are the detection of the spherical targets and the estimation of their parameters, which this paper addresses in detail.

2.2. PC-RANSAC Point Cloud Sphere Detection Algorithm

The flow of the fast and robust PC-RANSAC point cloud sphere detection algorithm is shown in Figure 2. It is divided into two parts: sphere detection and sphere parameter estimation. In spherical target detection, four random neighboring points are selected as the current sample points by the k-d nearest neighbor search algorithm, and point selection is constrained by the principal curvatures of the sample points; if the condition is not satisfied, a new sample is drawn, which improves the quality of the sample points. In sphere parameter estimation, the errors in both the coefficient matrix and the observation vector of the point cloud data are considered, and the total least squares algorithm optimizes the estimate of the spherical target center coordinates.

2.2.1. Principal Curvature Constraint for Sample Point Selection

At any point P on a continuous smooth surface S, there exist an infinite number of regular curves passing through that point, and they have different normal curvatures at P [24,25]. The maximum and minimum of these values, $k_1$ and $k_2$, are called the principal curvatures of the point. If all points of a continuous surface S satisfy $k_1 = k_2 \neq 0$, surface S must belong to the same sphere, and the normal curvature at every point equals the reciprocal of the sphere's radius [26,27]. If adjacent points with the same principal curvatures are found, the surface constructed by these neighboring points is a sphere in the point cloud. For each point P in the point cloud, let N be its unit normal vector. The normal curvature at P can be estimated from the point coordinates and normal vectors as follows.
Assume that there are $m$ neighboring points near $P$, that $q_i$ is the $i$th nearest neighbor of $P$, and that $M_i$ is the normal vector of $q_i$. Let the orthogonal coordinate system $(X, Y, Z)$ be the local coordinate system $L$ with point $P$ as the origin and with $Z$ along the normal vector $N = (n_{xp}, n_{yp}, n_{zp})$ of $P$; $X$ and $Y$ are orthogonal unit vectors in the tangent plane. In $L$, the coordinates of $q_i$ are $(x_i, y_i, z_i)$ and the components of $M_i$ are $(n_{xi}, n_{yi}, n_{zi})$. The three axes can be taken as $X = (-\sin\phi, \cos\phi, 0)$, $Y = (-\cos\varphi\cos\phi, -\cos\varphi\sin\phi, \sin\varphi)$, and $Z = N = (n_{xp}, n_{yp}, n_{zp})$, where $\phi = \arctan(n_{yp}/n_{xp})$ and $\varphi = \arccos(n_{zp})$.
The normal curvature $k_{ni}$ at point P can be estimated by constructing an approximate triangle from P, its normal vector, the nearest neighbor, and the normal vector of that neighbor. Figure 3 shows the geometric relationships of these variables.
Then, the normal curvature of P with respect to q i is estimated as follows:
$k_{ni} = \dfrac{\sin \beta}{\left| \overrightarrow{Pq_i} \right| \sin \alpha}$   (1)
where $\alpha$ is the angle between $N$ and $\overrightarrow{Pq_i}$, $\beta$ is the angle between $N$ and $M_i$, and $k_{ni}$ denotes the normal curvature of the normal section corresponding to the $i$th nearest neighbor point.
The following equation gives the approximation of Equation (1):
$k_{ni} = \dfrac{n_{xy}}{\sqrt{n_{xy}^2 + n_{zi}^2}\, \sqrt{x_i^2 + y_i^2}}$   (2)
where $n_{xy} = \dfrac{x_i n_{xi} + y_i n_{yi}}{\sqrt{x_i^2 + y_i^2}}$.
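For illustration, Equation (1) can be evaluated directly from a point, a neighbor, and their unit normals. The short Python sketch below is ours (not part of the paper); the function name normal_curvature and the numerical example on a sphere of radius 15 are illustrative assumptions.

```python
import numpy as np

def normal_curvature(p, n_p, q, n_q):
    """Triangle approximation of Equation (1): k = sin(beta) / (|Pq| * sin(alpha)),
    where alpha is the angle between n_p and the chord Pq, and beta is the angle
    between the two unit normals n_p and n_q."""
    pq = q - p
    d = np.linalg.norm(pq)
    cos_alpha = np.clip(np.dot(n_p, pq) / d, -1.0, 1.0)
    cos_beta = np.clip(np.dot(n_p, n_q), -1.0, 1.0)
    alpha, beta = np.arccos(cos_alpha), np.arccos(cos_beta)
    return np.sin(beta) / (d * np.sin(alpha))

# Two points on a sphere of radius 15: the estimate should be close to 1/15
center, r = np.array([20.0, 30.0, 40.0]), 15.0
n1, n2 = np.array([0.0, 0.0, 1.0]), np.array([np.sin(0.1), 0.0, np.cos(0.1)])
print(normal_curvature(center + r * n1, n1, center + r * n2, n2))  # ~0.0667
```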
As shown in Figure 3, let $e_1$ and $e_2$ be the principal directions corresponding to the principal curvatures of point P. Suppose that $\theta$ is the angle between the X-axis of the local coordinate system and the principal direction $e_1$ corresponding to the maximum principal curvature, that $Q_i$ is the projection of the nearest neighbor $q_i$ onto the tangent plane of S at P, and that $\theta_i$ is the angle between the vector $\overrightarrow{PQ_i}$ and the X-axis of the local coordinate system.
According to the Euler Equation, the normal curvature and the principal curvature are related as follows:
$k_{ni} = k_1 \cos^2(\theta_i + \theta) + k_2 \sin^2(\theta_i + \theta), \quad i = 1, 2, \ldots, m$   (3)
The task can be written as an optimization problem:
$\min_{k_1, k_2, \theta} \sum_{i=1}^{m} \left[ k_1 \cos^2(\theta_i + \theta) + k_2 \sin^2(\theta_i + \theta) - k_{ni} \right]^2$   (4)
Equation (4) can be translated into the following least squares problem.
$\min_{\mu} \left\| M_{m \times 3}\, \mu_{3 \times 1} - r_{m \times 1} \right\|^2$   (5)
In Equation (5): $M_{m \times 3} = \begin{bmatrix} \cos^2\theta_1 & 2\cos\theta_1 \sin\theta_1 & \sin^2\theta_1 \\ \vdots & \vdots & \vdots \\ \cos^2\theta_i & 2\cos\theta_i \sin\theta_i & \sin^2\theta_i \\ \vdots & \vdots & \vdots \\ \cos^2\theta_m & 2\cos\theta_m \sin\theta_m & \sin^2\theta_m \end{bmatrix}$; $r_{m \times 1} = \begin{bmatrix} k_{n1} \\ \vdots \\ k_{ni} \\ \vdots \\ k_{nm} \end{bmatrix}$; $\mu = (A, B, C)^T$; $A = k_1 \cos^2\theta + k_2 \sin^2\theta$; $B = (k_2 - k_1)\cos\theta \sin\theta$; $C = k_1 \sin^2\theta + k_2 \cos^2\theta$.
After the least squares fit of Equation (5), estimates of μ can be obtained accordingly, and using the derived values of A, B, and C, the Weingarten matrix can be inferred as follows:
$W = \begin{bmatrix} A & B \\ B & C \end{bmatrix} = \begin{bmatrix} k_1\cos^2\theta + k_2\sin^2\theta & (k_2 - k_1)\cos\theta\sin\theta \\ (k_2 - k_1)\cos\theta\sin\theta & k_1\sin^2\theta + k_2\cos^2\theta \end{bmatrix} = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} k_1 & 0 \\ 0 & k_2 \end{bmatrix} \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$   (6)
It follows that the principal curvatures $k_1$ and $k_2$ are the eigenvalues of the matrix $W$.
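The least squares fit of Equation (5) and the eigenvalue step of Equation (6) can be sketched in a few lines of Python. This is our illustration, not the authors' code; it assumes the tangent-plane angles $\theta_i$ and normal curvatures $k_{ni}$ of the neighbors have already been computed (for example with the sketch above), and the function name principal_curvatures is ours.

```python
import numpy as np

def principal_curvatures(theta, k_n):
    """Solve the least squares problem of Equation (5) for (A, B, C) and return the
    principal curvatures k1, k2 as the eigenvalues of the Weingarten matrix W."""
    M = np.column_stack([np.cos(theta) ** 2,
                         2.0 * np.cos(theta) * np.sin(theta),
                         np.sin(theta) ** 2])                    # M_{m x 3}
    (A, B, C), *_ = np.linalg.lstsq(M, k_n, rcond=None)          # mu = (A, B, C)^T
    W = np.array([[A, B], [B, C]])                               # Equation (6)
    return np.linalg.eigvalsh(W)                                 # (k1, k2)

# On a perfect sphere every normal curvature equals 1/R, so k1 = k2 = 1/R
theta = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
print(principal_curvatures(theta, np.full(8, 1.0 / 15.0)))       # [0.0667, 0.0667]
```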
From the above, the points satisfying $k_1 = k_2 \neq 0$ are identified as spherical points. However, owing to the measurement error of the point cloud, the point spacing, and the local fitting error, the derived principal curvature values deviate from the actual values, and $k_1$ and $k_2$ are rarely exactly equal. Therefore, a point can be recognized as a spherical point when the absolute value of their difference $\Delta$ does not exceed some limit $\xi$. However, differences in the radii of different target spheres lead to differences in the curvatures of the respective spherical points. When the radii of the target spheres in the scene are unknown and different, it is not suitable to choose a constant value of $\xi$.
This paper therefore identifies spherical points by the relative deviation of the principal curvatures, $\Delta' = \Delta / H$, where $H$ is the mean curvature, $H = (k_1 + k_2)/2$, and $\Delta = |k_1 - k_2|$. A point in the point cloud is identified as a spherical point when it satisfies the following two conditions (a code sketch of this test follows the list).
Condition 1: $\Delta' < \alpha$, where $\alpha$ is a critical value, set to 0.4 in the experiments described in this paper.
Condition 2: $k_1 k_2 > 0$, which excludes planar points and hyperbolic points.
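The following sketch shows one way the constraint could be embedded in the RANSAC sampling stage described in Section 2.2. It is not the authors' implementation: the helper names (is_spherical_point, sphere_through_4_points, pc_ransac_sample), the algebraic four-point sphere solve, and the assumption that the principal curvatures of every point have been precomputed are ours; the k-d tree neighbor query and the distance-threshold inlier count follow the description in the text.

```python
import numpy as np
from scipy.spatial import cKDTree

def is_spherical_point(k1, k2, alpha=0.4):
    """Conditions 1 and 2: relative principal-curvature deviation below alpha and
    positive Gaussian curvature (rejects planar and hyperbolic points)."""
    H = 0.5 * (k1 + k2)
    return k1 * k2 > 0.0 and abs(k1 - k2) < alpha * abs(H)

def sphere_through_4_points(pts):
    """Sphere through four points: solve 2a*x + 2b*y + 2c*z + d = x^2 + y^2 + z^2."""
    A = np.column_stack([2.0 * pts, np.ones(4)])
    a, b, c, d = np.linalg.solve(A, np.sum(pts ** 2, axis=1))
    center = np.array([a, b, c])
    return center, np.sqrt(d + center @ center)

def pc_ransac_sample(points, curvatures, n_iter=1000, dist_thr=0.4, seed=None):
    """Curvature-constrained RANSAC sampling stage; curvatures[i] = (k1, k2) of point i."""
    rng = np.random.default_rng(seed)
    tree = cKDTree(points)                                  # k-d nearest neighbor search
    best = (None, None, -1)                                 # (center, radius, inlier count)
    for _ in range(n_iter):
        _, idx = tree.query(points[rng.integers(len(points))], k=4)   # seed + 3 neighbors
        if not all(is_spherical_point(*curvatures[i]) for i in idx):
            continue                                        # reject low-quality samples early
        try:
            center, radius = sphere_through_4_points(points[idx])
        except np.linalg.LinAlgError:
            continue                                        # degenerate (near-coplanar) sample
        resid = np.abs(np.linalg.norm(points - center, axis=1) - radius)
        inliers = int(np.count_nonzero(resid < dist_thr))
        if inliers > best[2]:
            best = (center, radius, inliers)
    return best
```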

2.2.2. Total Least Squares Algorithm-Corrected Sphere Parameters

The set of interior points corresponding to the optimal model parameters M* is obtained: ( x i , y i , z i ) , i = 1 , 2 , , n , and the spatial spherical equation is established as:
$(x - a_0)^2 + (y - b_0)^2 + (z - c_0)^2 = r^2$   (7)
where $a_0$, $b_0$, and $c_0$ are the coordinates of the center of the sphere and $r$ is the radius of the sphere. Considering that there are errors in the three directions x, y, and z, let $v_x$, $v_y$, and $v_z$ be the error corrections in the x, y, and z directions, respectively. We can rewrite the spherical equation as follows:
$(x - v_x)^2 + (y - v_y)^2 + (z - v_z)^2 = 2(x - v_x)a_0 + 2(y - v_y)b_0 + 2(z - v_z)c_0 + r^2 - a_0^2 - b_0^2 - c_0^2$
After rearranging Equation (7), the following equation is obtained.
$y_{n \times 1} + e_y = (A_{n \times 4} + E_{A_{n \times 4}})\, X_{4 \times 1}$   (8)
where $E_A$ and $e_y$ in Equation (8) denote the errors of the coefficient matrix $A$ and the observation vector $y$, respectively, and
$y_{n \times 1} = \begin{bmatrix} x_1^2 + y_1^2 + z_1^2 \\ x_2^2 + y_2^2 + z_2^2 \\ \vdots \\ x_n^2 + y_n^2 + z_n^2 \end{bmatrix}$; $A_{n \times 4} = \begin{bmatrix} 2x_1 & 2y_1 & 2z_1 & 1 \\ 2x_2 & 2y_2 & 2z_2 & 1 \\ \vdots & \vdots & \vdots & \vdots \\ 2x_n & 2y_n & 2z_n & 1 \end{bmatrix}$; $e_y = \begin{bmatrix} v_{x1}^2 + v_{y1}^2 + v_{z1}^2 \\ v_{x2}^2 + v_{y2}^2 + v_{z2}^2 \\ \vdots \\ v_{xn}^2 + v_{yn}^2 + v_{zn}^2 \end{bmatrix}$;
$E_{A_{n \times 4}} = \begin{bmatrix} -2v_{x1} & -2v_{y1} & -2v_{z1} & 0 \\ -2v_{x2} & -2v_{y2} & -2v_{z2} & 0 \\ \vdots & \vdots & \vdots & \vdots \\ -2v_{xn} & -2v_{yn} & -2v_{zn} & 0 \end{bmatrix}$; $X_{4 \times 1} = \begin{bmatrix} a_0 \\ b_0 \\ c_0 \\ r^2 - a_0^2 - b_0^2 - c_0^2 \end{bmatrix}$.
This problem is generally solved by singular value decomposition. Firstly, the singular value decomposition of the augmented matrix $[A \;\; y]$ is performed:
$[A \;\; y] = U \Sigma V^T$   (9)
The resulting model parameter estimation is obtained as:
$X = -\dfrac{1}{v_{5,5}} \begin{bmatrix} v_{1,5} & v_{2,5} & v_{3,5} & v_{4,5} \end{bmatrix}^T$   (10)
where $v_{i,j}$ denotes the element of $V$ in row $i$ and column $j$, and the fifth column of $V$ corresponds to the smallest singular value of the augmented matrix.
According to Equation (10), the spherical target parameter vector $X = \begin{bmatrix} a_0 & b_0 & c_0 & r^2 - a_0^2 - b_0^2 - c_0^2 \end{bmatrix}^T$ is the optimal estimate, from which the sphere center $(a_0, b_0, c_0)$ and the radius $r$ are recovered.
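The whole total least squares refinement of Equations (8) to (10) reduces to a few lines of linear algebra, as in the sketch below. It is our illustration rather than the paper's code; the function name tls_sphere_fit and the synthetic noisy test sphere are assumptions.

```python
import numpy as np

def tls_sphere_fit(points):
    """Total least squares sphere fit via the SVD of the augmented matrix [A | y]
    (Equations (8)-(10)); X = (a0, b0, c0, r^2 - a0^2 - b0^2 - c0^2)."""
    A = np.column_stack([2.0 * points, np.ones(len(points))])    # n x 4 coefficient matrix
    y = np.sum(points ** 2, axis=1)                              # observation vector
    _, _, Vt = np.linalg.svd(np.column_stack([A, y]))            # [A y] = U S V^T
    v = Vt[-1]                     # right singular vector of the smallest singular value
    X = -v[:4] / v[4]              # Equation (10)
    center = X[:3]
    return center, np.sqrt(X[3] + center @ center)

# Noisy points on a sphere of radius 15 centered at (20, 30, 40)
rng = np.random.default_rng(0)
d = rng.normal(size=(500, 3))
d /= np.linalg.norm(d, axis=1, keepdims=True)
pts = np.array([20.0, 30.0, 40.0]) + 15.0 * d + rng.normal(scale=0.05, size=(500, 3))
print(tls_sphere_fit(pts))         # center close to (20, 30, 40), radius close to 15
```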
The PC-RANSAC point cloud sphere detection algorithm exploits the properties of the differential geometry of spherical point clouds. It uses point cloud principal curvature constraints to improve the quality of RANSAC sample points. The current optimal set of interior points obtained at the termination of the iteration is also corrected by the total least squares algorithm. This method reduces the influence of point cloud data coefficient matrix and observation vector errors on the fitted spherical surface results. The following experiments illustrate the effectiveness and practicality of the method.

3. Experimental Results and Analysis

The experimental operating environment was an Intel Core 2.4 GHz CPU and MATLAB 2020 platform. Numerous experiments with synthetic and real data were performed to validate the proposed method. Efficiency and accuracy comparisons and large-scale measurement experiments were carried out. The detection efficiency and accuracy comparisons were divided into a simulation data experiment and an actual acquisition standard ball experiment.

3.1. Detection Efficiency and Accuracy Verification Experiments

3.1.1. Simulation Experiments

Because the spherical target captured by a surface structured light camera usually covers slightly less than a hemisphere, a standard hemispherical surface point cloud of 3000 points was generated in MATLAB. The coordinates of the center of the sphere were (20, 30, 40) mm, and the radius of the sphere was 15 mm. The noise in the point cloud was randomly generated, and the ratio of the number of spherical model points to the number of noise points was set as W. Point clouds with W = 10% to W = 40% (in steps of 10%), called noisy spherical point clouds, were generated; they are shown in Figure 4.
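Synthetic data of this kind can be generated as in the sketch below. The paper does not state the exact noise model, so the uniform box noise, the function name noisy_hemisphere, and the random seed are our assumptions; only the point counts, center, radius, and ratio W follow the text.

```python
import numpy as np

def noisy_hemisphere(n_sphere=3000, w=0.4, center=(20.0, 30.0, 40.0), r=15.0, seed=0):
    """Hemispherical point cloud plus random noise; w is the ratio of spherical
    points to noise points (w = 0.1 ... 0.4, as in Figure 4)."""
    rng = np.random.default_rng(seed)
    center = np.asarray(center, dtype=float)
    d = rng.normal(size=(n_sphere, 3))
    d /= np.linalg.norm(d, axis=1, keepdims=True)
    d[:, 2] = np.abs(d[:, 2])                      # keep only the upper hemisphere
    sphere_pts = center + r * d
    n_noise = int(round(n_sphere / w))             # w = 10% means 10x as many noise points
    noise = center + rng.uniform(-2.0 * r, 2.0 * r, size=(n_noise, 3))
    return np.vstack([sphere_pts, noise])

print(noisy_hemisphere(w=0.1).shape)               # (33000, 3): 3000 sphere + 30000 noise points
```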
Spherical point detection was performed on the simulated point cloud data with the RANSAC algorithm, the 3D Hough algorithm, and the PC-RANSAC algorithm. The experimental results are given as the mean and standard deviation of the sphere center coordinates and sphere radius over 20 independent repetitions. In this paper, the standard deviation of the detected center coordinates and radius of the spherical model from the actual values is used to represent the accuracy of an algorithm; a smaller value indicates that the estimated parameters are closer to the true values. The standard deviation is calculated with Equation (11), and the experimental results are shown in Table 1 and Figure 5.
$s = \sqrt{ \dfrac{ (\bar{x} - x)^2 + (\bar{y} - y)^2 + (\bar{z} - z)^2 + (\bar{r} - r)^2 }{4} }$   (11)
where $(\bar{x}, \bar{y}, \bar{z})$ and $\bar{r}$ are the estimated sphere center coordinates and radius, and $(x, y, z)$ and $r$ are the corresponding true values.
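As a worked example of Equation (11) (our illustration, not the authors' code), applying it to the PC-RANSAC means for W = 40% in Table 1 against the simulated ground truth (20, 30, 40) and r = 15 gives roughly 0.09 mm.

```python
import numpy as np

def parameter_std(estimated, true):
    """Equation (11): root-mean-square deviation of the estimated sphere parameters
    (x, y, z, r) from their true values."""
    e = np.asarray(estimated, dtype=float) - np.asarray(true, dtype=float)
    return np.sqrt(np.mean(e ** 2))

print(parameter_std([20.158, 29.989, 40.059, 14.947], [20.0, 30.0, 40.0, 15.0]))  # ~0.089
```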
Table 1 shows that the detection time of all three algorithms increases as the proportion of noise in the point cloud increases; the increase is largest for the 3D Hough algorithm, the RANSAC algorithm is faster than the 3D Hough algorithm, and the detection time of the PC-RANSAC algorithm is always shorter than that of the traditional RANSAC algorithm. Figure 5 shows that when the noise level is low, the three methods detect the spherical model with similar accuracy. As the noise increases, the RANSAC algorithm becomes more accurate than the 3D Hough algorithm for spherical targets in noisy point clouds, and the results of the PC-RANSAC algorithm are closer to the real values and consistently better than those of both the RANSAC and 3D Hough algorithms. The experimental results show that the PC-RANSAC algorithm improves on the efficiency and accuracy of the traditional RANSAC algorithm for detecting spherical surfaces, with better robustness.

3.1.2. Standard Ball Experiment

In this experiment, the point cloud data of standard balls and a shaft part were obtained with a XUNHENG 3D scanner, as shown in Figure 6a. The scanner performs photographic scanning of smooth structured surfaces; its adaptive working distance is from 40 mm to 2000 mm, its scanning range is 200 mm, and its accuracy is 0.04 mm. The original point cloud data contain two ceramic standard balls and a shaft part. The diameter of the two standard balls is 30 mm, and the center distance of the two balls is 60 mm, as shown in Figure 6b. The original point cloud contained 1 million points; after voxel downsampling, the number dropped to 82,340. The standard balls were measured with a Global Classic SR 05-07-05 CMM from HEXAGON: the diameters of the two standard balls were 30.004 mm and 30.005 mm, respectively, and their center distance was 60.010 mm. Since the previous experiment confirmed that the RANSAC algorithm outperforms the 3D Hough algorithm in detecting spherical surfaces in noisy point clouds, the comparison experiment here uses RANSAC and PC-RANSAC.
To verify the robustness of this algorithm, a more relaxed threshold was chosen. The number of iterations in the experiment was set at k = 20,000, and the distance threshold was chosen to be 0.4. The experimental results were obtained after repeating 20 independent experiments to detect the spherical surface, as shown in Table 2; the sphere center distance between the two spherical surfaces, as shown in Table 3; and the average radius difference between the spherical surfaces, as shown in Figure 7. The total time required to detect the spherical surface is shown in Table 4.
As shown in Figure 7 and Table 3 and Table 4, the PC-RANSAC algorithm detected the average sphere center distance of the standard ball as 59.990 mm with a detection time of 4.62 s. The RANSAC algorithm detected the average sphere center distance of the standard ball as 59.587 mm with a detection time of 8.58 s. The experimental results show that the PC-RANSAC algorithm can effectively detect the spherical surface in the point cloud model, and the detection accuracy and speed are improved compared to the traditional RANSAC. The experimental results fully demonstrate the effectiveness and feasibility of the method.

3.2. Large-Scale Measurement Experiment

In order to verify the practicality of the proposed algorithm in large-scale coordinate conversion, the experiment used spherical targets and a 3D scanner driven by a robotic arm as the measurement tools and a standard rod as the measurement object. The standard rod fitting consists of an aluminum profile with a standard sphere fixed at each end, and the sphere center distance measured by the articulated arm was used as the standard value: the center distance between the spheres at the two ends of the standard rod was 861.890 ± 0.008 mm, and the two sphere diameters were 30.007 ± 0.025 mm and 29.990 ± 0.025 mm, respectively.
The experimental procedure was as follows: (1) The positions of the two sets of targets were calibrated with the articulated arm before the measurement; the target positions are shown in the boxed parts in Figure 8a, and their measurement data are shown in Table 5.
(2) The standard rod fitting was placed on the horizontal surface of the adjacent target and fixed. Standard rod sphere 1 and target 1 and standard rod sphere 2 and target 2 were each scanned by operating the 3D scanner driven by the robotic arm, as shown in the square boxed area in Figure 8b. The improved RANSAC algorithm was used to automatically detect the target sphere point cloud and standard rod sphere point cloud in the point cloud data and fit them to obtain each sphere parameter. The measured data are shown in Table 6.
(3) Using the coordinates of the same spherical targets in the different measurement coordinate systems, the rotation and translation (RT) matrices that convert the point cloud data of targets 1 and 2 into the joint arm coordinate system were obtained for coordinate unification.
The RT matrix of target 1: $R_1 = \begin{bmatrix} 0.929 & 0.331 & 0.166 \\ 0.180 & 0.013 & 0.984 \\ 0.323 & 0.944 & 0.072 \end{bmatrix}$, $T_1 = \begin{bmatrix} 65.718 \\ 431.829 \\ 548.694 \end{bmatrix}$.
The RT matrix of target 2: $R_2 = \begin{bmatrix} 0.922 & 0.349 & 0.167 \\ 0.181 & 0.007 & 0.984 \\ 0.342 & 0.937 & 0.069 \end{bmatrix}$, $T_2 = \begin{bmatrix} 328.124 \\ 431.778 \\ 259.960 \end{bmatrix}$.
(4) Using the RT matrices derived in step (3), the two parts of the point cloud data were converted and unified into the joint arm coordinate system (see the transformation sketch after this list). The unified point cloud data are shown in Table 7, and the unification result is shown in Figure 9.
(5) As shown in Table 7, the spherical surfaces were detected by the improved RANSAC algorithm, and the center distance between the spheres at the two ends of the standard rod obtained after coordinate unification is 861.865 mm. The experiment was repeated ten times by moving the robot, and the value measured by the articulated arm was used as the true value of the sphere center distance. Each single measurement result was compared with the true value; the experimental results and data are shown in Table 8.
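Steps (3) and (4) amount to estimating and applying a rigid transform per target. The paper does not state which solver it uses to obtain R and T from the three corresponding sphere centers, so the SVD (Kabsch) solution sketched below, together with the function names rigid_transform and to_global, is an assumption; the commented usage lines refer to hypothetical variables holding the Table 5 and Table 6 centers and the scanned clouds.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least squares rigid transform (R, T) with dst ≈ R @ src + T, computed from
    corresponding sphere centers via the SVD (Kabsch) solution."""
    src, dst = np.asarray(src, dtype=float), np.asarray(dst, dtype=float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)                         # 3 x 3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # guard against reflection
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

def to_global(points, R, T):
    """Apply the step (4) unification: p_global = R @ p_local + T."""
    return np.asarray(points) @ R.T + T

# centres_scanner: three target-sphere centers in the scanner frame (cf. Table 6)
# centres_arm:     the same centers measured by the articulated arm (cf. Table 5)
# R1, T1 = rigid_transform(centres_scanner, centres_arm)
# cloud1_global = to_global(cloud1, R1, T1)
```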
After the point cloud data coordinates were unified, the sphere center distances between the two ends of the standard rod were obtained, as shown in Table 8. The maximum deviation was 0.097 mm, the minimum deviation was 0.018 mm, and the average was 861.876 mm. The standard deviation was 0.0481 mm, and the standard uncertainty of the measurement mean was 0.01521 mm. With degrees of freedom v = 9 and confidence probability p = 95%, querying the t-distribution table yielded k = 2.821, so the expanded uncertainty of the mean is U = 0.045 mm. The measurement result of the ball center distance after coordinate unification is therefore 861.876 ± 0.045 mm. These results show the practicality of the proposed algorithm in large-scale coordinate unification.
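The summary statistics of this paragraph can be reproduced from Table 8 as in the short sketch below (our illustration; the population form of the standard deviation, ddof = 0, is assumed because it matches the reported 0.0481 mm, and the coverage factor k = 2.821 is taken directly from the text).

```python
import numpy as np

# The ten unified center-to-center distances from Table 8 (mm)
d = np.array([861.865, 861.793, 861.926, 861.966, 861.864,
              861.926, 861.842, 861.867, 861.836, 861.872])

mean = d.mean()              # ~861.876 mm
s = d.std(ddof=0)            # ~0.0481 mm
u = s / np.sqrt(len(d))      # standard uncertainty of the mean, ~0.0152 mm
k = 2.821                    # coverage factor quoted in the text (t table, v = 9)
print(mean, s, u, k * u)     # expanded uncertainty U = k * u
```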
According to the above experiments, the PC-RANSAC algorithm is better than the traditional algorithm and 3D Hough algorithm in terms of detection accuracy and speed. The PC-RANSAC algorithm uses the point cloud principal curvature to constrain the selection of sample points. The efficiency of the traditional RANSAC algorithm is improved, so the algorithm running time can be shortened. The PC-RANSAC algorithm also uses the total least squares algorithm to optimize the interior point set of the current optimal sphere obtained by fitting. It can reduce the influence of the coefficient matrix and observation vector error of the point cloud data on the sphere fitting result so that it can improve the accuracy of fitting sphere parameters.

4. Conclusions

This paper proposes a novel method to automatically detect sphere targets in point clouds and improve the accuracy of estimating sphere parameters. The main contributions and novelty of this paper are as follows:
(1) We propose an improved RANSAC point cloud spherical target detection and parameter estimation method based on principal curvature constraint. The method applies to the automatic extraction of spherical targets when the coordinates of large-scale combined measurements are unified. The algorithm improves the iteration efficiency by constraining the sample point quality through the principal curvature. Considering the errors in both the coefficient matrix and observation matrix when fitting the point cloud data, the method uses the total least squares algorithm to optimally estimate the sphere parameters.
(2) Experimental results show that the method can automatically detect spherical targets in point clouds and that it achieves better detection accuracy and detection speed than the traditional RANSAC algorithm. The method was also applied to the coordinate unification of large-scale combined measurements, which demonstrates its practicality.

Author Contributions

Conceptualization, Q.W.; data curation, C.G.; investigation, G.S.; methodology, J.L.; project administration, Z.L.; writing—original draft, Q.W.; writing—review and editing, B.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China [grant number 51275158]; Development and industrialization of digital maintenance and measurement platform for aircraft landing gear [grant number 4201/01572].

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study.

References

  1. Baek, J.; Park, J.; Cho, S.; Lee, C. 3D Global Localization in the Underground Mine Environment Using Mobile LiDAR Mapping and Point Cloud Registration. Sensors 2022, 22, 2873. [Google Scholar] [CrossRef] [PubMed]
  2. Jiang, T.; Cheng, X.; Cui, H.; Li, X. Combined shape measurement based on locating and tracking of an optical scanner. J. Instrum. 2019, 14, P01006. [Google Scholar] [CrossRef]
  3. Sladek, J.; Blaszczyk, P.M.; Kupiec, M.; Sitnik, R. The hybrid contact–optical coordinate measuring system. Measurement 2011, 44, 503–510. [Google Scholar] [CrossRef]
  4. Weckenmann, A.; Krämer, P.; Hoffmann, J. Manufacturing Metrology-State of the Art and Prospects. In Proceedings of the 9th International Symposium on Measurement and Quality Control, Chennai, India, 21–24 November 2007. [Google Scholar]
  5. Chen, Z.; Zhang, F.; Qu, X.; Liang, B. Fast Measurement and Reconstruction of Large Workpieces with Freeform Surfaces by Combining Local Scanning and Global Position Data. Sensors 2015, 15, 14328–14344. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  6. George, V.; Sander, D. 3D building model reconstruction from point clouds and ground plans. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2001, 34, 37–44. [Google Scholar]
  7. Wei, J.; He, M.; Xiong, B.; Feng, Y. Algorithm for Finding Registration Sphere Center in Merging Multiple-View 3D Point Clouds. J. Comput.-Aided Des. Comput. Graph. 2006, 18, 416. [Google Scholar]
  8. Fischler, M.A.; Bolles, R.C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 1981, 24, 381–395. [Google Scholar] [CrossRef]
  9. Fan, M.; Lee, T. Variants of Seeded Region Growing. IET Image Processing 2015, 9, 478–485. [Google Scholar] [CrossRef]
  10. Schnabel, R.; Wahl, R.; Klein, R. Efficient RANSAC for Point-Cloud Shape Detection. Comput. Graph. Forum 2010, 26, 214–226. [Google Scholar] [CrossRef]
  11. Liu, W.I. Novel method for sphere target detection and center estimation from mobile terrestrial laser scanner data. Measurement 2019, 137, 617–623. [Google Scholar] [CrossRef]
  12. Ebrahimi, A.; Czarnuch, S. Automatic Super-Surface Removal in Complex 3D Indoor Environments Using Iterative Region-Based RANSAC. Sensors 2021, 21, 3724. [Google Scholar] [CrossRef] [PubMed]
  13. Oh, S.; Lee, D.; Kim, M.; Kim, T.; Cho, H. Building Component Detection on Unstructured 3D Indoor Point Clouds Using RANSAC-Based Region Growing. Remote Sens. 2021, 13, 161. [Google Scholar] [CrossRef]
  14. Camurri, M.; Vezzani, R.; Cucchiara, R. 3D Hough transform for sphere recognition on point clouds: A systematic study and a new method proposal. Mach. Vis. Appl. 2014, 25, 1877–1891. [Google Scholar] [CrossRef]
  15. Abuzaina, A.; Nixon, M.S.; Carter, J.N. Sphere Detection in Kinect Point Clouds via the 3D Hough Transform. Comput. Anal. Images Patterns 2013, 8048, 290–297. [Google Scholar]
  16. Li, A.; Junfu, Y.; Yunlan, G. The extraction of spherical characteristics from point cloud based on improved hough transform. Eng. Surv. Mapp. 2016, 25, 4. [Google Scholar]
  17. Minglei, L.; Guangyun, L.; Li, W. Automatic Feature Detecting from point clouds using 3D Hough transform. Bull. Surv. Mapp. 2015, 29–33. [Google Scholar] [CrossRef]
  18. Vo, A.V.; Truong-Hong, L.; Laefer, D.F.; Bertolotto, M. Octree-based region growing for point cloud segmentation. ISPRS J. Photogramm. Remote Sens. 2015, 104, 88–100. [Google Scholar] [CrossRef]
  19. Nurunnabi, A.; Belton, D.; West, G. Robust Segmentation in Laser Scanning 3D Point Cloud Data. In Proceedings of the 2012 International Conference on Digital Image Computing Techniques & Applications, Fremantle, WA, Australia, 3–5 December 2012. [Google Scholar]
  20. Shi, H. Automatic registration of laser point cloud using precisely located sphere targets. J. Appl. Remote Sens. 2014, 8, 5230–5237. [Google Scholar]
  21. Huang, J.; Wang, Z.; Bao, W.; Gao, J. A High-Precision Registration Method Based on Auxiliary Sphere Targets. In Proceedings of the 2014 International Conference on Digital Image Computing: Techniques & Applications, Wollongong, NSW, Australia, 25–27 November 2014. [Google Scholar]
  22. Shuyang, S. Study on Sphere Detection Methods Based on Three-Dimensional Point Clouds; Northwest University: Xi’an, China, 2020. [Google Scholar]
  23. Yang, R.; Li, J.; Meng, X.; You, Y. A Rigorous Feature Extraction Algorithm for Spherical Target Identification in Terrestrial Laser Scanning. Remote Sens. 2022, 14, 1491. [Google Scholar] [CrossRef]
  24. Cormo, B.D. Differential Geometry of Curves and Surfaces. Comput. Aided Eng. Des. 2013, 2, 273–275. [Google Scholar]
  25. Zhang, X.; Gao, X.J. Registration of Point Clouds Based on Differential Geometry of Surface’s Feature. Appl. Mech. Mater. 2011, 101–102, 232–235. [Google Scholar] [CrossRef]
  26. Eskandari, M.; Laurendeau, D. Covariance Based Differential Geometry Segmentation Techniques for Surface Representation Using Vector Field Framework. J. WSCG 2020, 28, 169–178. [Google Scholar]
  27. Yang, X.; Huang, Y.; Zhang, Q. Automatic Stockpile Extraction and Measurement Using 3D Point Cloud and Multi-Scale Directional Curvature. Remote Sens. 2020, 12, 960. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Diagram of combined measurement system.
Figure 2. Algorithm flow chart.
Figure 3. Local coordinate system L.
Figure 4. Noisy spherical point clouds at different noise scales, where W is the ratio of the number of spherical model points to the number of noise points: (a) W = 10%; (b) W = 20%; (c) W = 30%; (d) W = 40%.
Figure 5. The standard deviation of the sphere parameters estimated by the different methods on the simulation data.
Figure 6. Experimental apparatus and scanning results. (a) Three-dimensional scanner; (b) actual scanned point cloud.
Figure 7. The average radius difference obtained by each algorithm in detecting the spherical surface.
Figure 8. Coordinate unification experiment diagram: (a) measurement of target 1 and standard rod sphere 1; (b) measurement of target 2 and standard rod sphere 2.
Figure 9. The effect of point cloud data coordinate unification.
Table 1. Sphere detection results of simulation data for each method.

W (%) | Fitting Method | x (mm) | y (mm) | z (mm) | r (mm) | Time (s)
40 | RANSAC | 19.951 | 30.017 | 39.907 | 14.890 | 5.24
40 | 3D Hough | 20.158 | 30.087 | 39.887 | 14.841 | 10.25
40 | PC-RANSAC | 20.158 | 29.989 | 40.059 | 14.947 | 3.85
30 | RANSAC | 20.120 | 29.881 | 40.156 | 15.145 | 9.35
30 | 3D Hough | 20.461 | 29.438 | 40.379 | 15.438 | 15.24
30 | PC-RANSAC | 20.114 | 29.956 | 39.979 | 14.925 | 4.22
20 | RANSAC | 19.819 | 29.776 | 40.294 | 14.575 | 13.45
20 | 3D Hough | 20.755 | 29.312 | 39.324 | 14.152 | 17.35
20 | PC-RANSAC | 19.924 | 30.018 | 39.857 | 14.883 | 5.31
10 | RANSAC | 20.855 | 29.437 | 39.322 | 14.447 | 15.45
10 | 3D Hough | 21.755 | 28.532 | 41.204 | 13.682 | 20.54
10 | PC-RANSAC | 20.149 | 29.836 | 39.843 | 14.855 | 7.53
Table 2. Sphere detection results of the data for each method.

Sphere | Detection Method | x (mm) | y (mm) | z (mm) | r (mm)
Sphere 1 | RANSAC | −47.625 | 15.835 | 587.507 | 14.903
Sphere 1 | PC-RANSAC | −47.387 | 16.021 | 587.681 | 15.021
Sphere 2 | RANSAC | −45.029 | −43.562 | 583.516 | 14.885
Sphere 2 | PC-RANSAC | −45.352 | −43.806 | 583.752 | 14.972
Table 3. Average sphere center distance detected by each algorithm.

Detection Method | Distance between the Centers of the Two Spheres (mm)
RANSAC | 59.587
PC-RANSAC | 59.990
Table 4. Detection time of spherical surface for each algorithm.

Detection Method | Time (s)
RANSAC | 8.58
PC-RANSAC | 4.62
Table 5. Target ball center values for articulated arm calibration.

Group | Serial Number | x (mm) | y (mm) | z (mm)
Target 1 | Sphere 1 | 386.703 | −15.863 | 239.877
Target 1 | Sphere 2 | 425.466 | −10.260 | 265.832
Target 1 | Sphere 3 | 368.809 | −10.346 | 285.568
Target 2 | Sphere 1 | 116.247 | −15.956 | −561.551
Target 2 | Sphere 2 | 126.015 | 9.198 | −616.762
Target 2 | Sphere 3 | 76.307 | −8.218 | −600.384
Table 6. Sphere center values of the scanner’s detection target.

Group | Serial Number | x (mm) | y (mm) | z (mm)
Target 1 | Sphere 1 | 386.703 | −15.863 | 239.877
Target 1 | Sphere 2 | 425.466 | −10.260 | 265.832
Target 1 | Sphere 3 | 368.809 | −10.346 | 285.568
Target 2 | Sphere 1 | 116.247 | −15.956 | −561.551
Target 2 | Sphere 2 | 126.015 | 9.198 | −616.762
Target 2 | Sphere 3 | 76.307 | −8.218 | −600.384
Standard rod sphere | Sphere 1 | 78.436 | −33.811 | −437.573
Standard rod sphere | Sphere 2 | 70.601 | −22.050 | −435.581
Table 7. Measurement data after coordinate unification.

Group | Serial Number | x (mm) | y (mm) | z (mm)
Target 1 | Sphere 1 | 386.729 | −15.868 | 239.832
Target 1 | Sphere 2 | 425.442 | −10.260 | 265.833
Target 1 | Sphere 3 | 368.806 | −10.34 | 285.612
Target 2 | Sphere 1 | 116.37 | −15.935 | −561.539
Target 2 | Sphere 2 | 126.056 | 9.176 | −616.689
Target 2 | Sphere 3 | 76.143 | −8.217 | −600.469
Standard rod sphere | Sphere 1 | 485.284 | 15.305 | 234.505
Standard rod sphere | Sphere 2 | 210.394 | 15.931 | −582.346
Table 8. Data of ball center distance after coordinate unification.

Serial Number | Distance between the Centers of the Two Spheres (mm)
1 | 861.865
2 | 861.793
3 | 861.926
4 | 861.966
5 | 861.864
6 | 861.926
7 | 861.842
8 | 861.867
9 | 861.836
10 | 861.872
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
