Article

Extrinsic Calibration of 2D Laser Rangefinders Using an Existing Cuboid-Shaped Corridor as the Reference

1 State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430079, China
2 Collaborative Innovation Center of Geospatial Technology, Wuhan University, Wuhan 430079, China
3 Department of Remote Sensing and Photogrammetry, Center of Excellence in Laser Scanning Research, Finnish Geospatial Research Institute, 02430 Masala, Finland
* Author to whom correspondence should be addressed.
Sensors 2018, 18(12), 4371; https://doi.org/10.3390/s18124371
Submission received: 7 October 2018 / Revised: 6 December 2018 / Accepted: 7 December 2018 / Published: 10 December 2018
(This article belongs to the Special Issue Laser Sensors for Displacement, Distance and Position)

Abstract:
Laser rangefinders (LRFs) are widely used in autonomous systems for indoor positioning and mobile mapping through the simultaneous localization and mapping (SLAM) approach. The extrinsic parameters of multiple LRFs need to be determined, and they are one of the key factors impacting system performance. This study presents an extrinsic calibration method of multiple LRFs that requires neither extra calibration sensors nor special artificial reference landmarks. Instead, it uses a naturally existing cuboid-shaped corridor as the calibration reference, and it hence needs no additional cost. The present method takes advantage of two types of geometric constraints for the calibration, which can be found in a common cuboid-shaped corridor. First, the corresponding point cloud is scanned by the set of LRFs. Second, the lines that are scanned on the corridor surfaces are extracted from the point cloud. Then, the lines within the same surface and the lines within two adjacent surfaces satisfy the coplanarity constraint and the orthogonality constraint, respectively. As such, the calibration problem is converted into a nonlinear optimization problem with the constraints. Simulation experiments and experiments based on real data verified the feasibility and stability of the proposed method.

1. Introduction

A 2D laser rangefinder (LRF) can provide accurate ranges with high angular resolution over long distances [1]. As it has lower power consumption, smaller size, and lower cost than general 3D laser scanners, it is widely used in autonomous systems for robot positioning [2], navigation [3], and mobile mapping [4,5] through the simultaneous localization and mapping (SLAM) technique [6]. Many systems, such as light detection and ranging (LiDAR)-based SLAM systems [7,8], employ several 2D LRFs simultaneously to perform sensor fusion, which makes extrinsic calibration essential for fusing all the LiDAR point clouds into a global reference frame. Moreover, the performance of mapping and SLAM is sensitive to calibration errors, especially at long working ranges, where small rotation errors can produce significant distortions in the map [9].
The calibration of the sensors is generally divided into intrinsic parameter calibration and extrinsic parameter calibration. The intrinsic parameters relate to the acquisition process and involve issues that are both temporal and geometric [10,11]. The extrinsic parameters determine the pose transformation relationship among multiple sensors or between one sensor and a reference coordinate system [12]. The work of this paper focuses on the extrinsic calibration of multiple 2D LRFs, assuming that the intrinsic calibration has been done.
The extrinsic calibration of multiple 2D LRFs is more difficult than the extrinsic calibration of multiple 3D laser scanners [13,14], multiple depth cameras [15], or even the extrinsic calibration between one LRF and one camera [16,17,18]. The latter can be done by placing retroreflective targets or using features that are easily recognized in 3D scenes, whereas it is hard to find discriminative features in a 2D LRF point cloud. Although multiple LRFs can be indirectly calibrated by doing the calibration between each LRF and another type of sensor, such as a camera, not all LRFs have enough of a common field of view with the camera, and the propagation of error can also affect the final calibration accuracy.
Some applications are only concerned with a three-degree-of-freedom (DOF) extrinsic calibration on the horizontal scanning plane. For example, in References [19,20], the three-DOF extrinsic calibration of multiple LRFs on the scanning plane was achieved by matching the target motion trajectories in the overlapping scanning region of the LRFs. However, in many applications, such as 3D mapping, there may be large angles between the scanning planes [8,12], and then a six-DOF extrinsic calibration is required. The authors of Reference [12] built a special facility based on the scanning planes of multiple LRFs to ensure all the LRFs were able to scan the small V-shaped targets mounted on the facility, and then used the centers of the targets as control points for the extrinsic calibration. Obviously, this method required manual targets to be made based on pre-measured installation configurations, and it did not automatically solve the problem of finding distinguishing features on multiple LRF scan planes.
However, although it is difficult to directly find corresponding discriminative features in LRF point clouds, some geometric characteristics can be used to indirectly estimate control points located outside the scanning planes of the LRFs. Similar to the use of spherical targets for the extrinsic calibration of multiple Kinect sensors [21], a 2D LRF obtains a circular arc when scanning a spherical target; the position of the spherical center outside the scanning plane can be derived from the known radius of the target, and the extrinsic calibration between multiple LRFs can then be performed using the corresponding target centers in all the frame data [22,23]. In Reference [24], the extrinsic calibration between a single LRF and a fixed 3D reference frame was done with designed targets, using the inherent geometric characteristics formed by a cone and a pyramid. Additionally, the authors of Reference [25] used conic targets to perform an extrinsic calibration between a 3D laser scanner and a 2D LRF. In References [26,27,28], geometry-based methods were used for the extrinsic calibration between a single LRF and a camera: Reference [26] used a scene corner to form a line-to-plane and a point-to-plane constraint, Reference [27] used a simple folding pattern to form a rotation constraint and a point-to-plane constraint, and Reference [28] used a V-shaped pattern to form a point-to-plane constraint. All these geometric constraints convert the calibration problem into an error minimization problem so as to achieve extrinsic calibration.
In Reference [29], the authors used the geometric constraints formed by static objects (buildings) to achieve the extrinsic calibration between a 3D laser scanner and an inertial measurement unit (IMU). In References [30,31,32], the calibration between a 2D LRF, an IMU, a GNSS, and the body reference frame of a multisensor system was performed using differently oriented georeferenced planar surfaces. With the linking component from a 3D scanner, the calibration could be done by minimizing the distance between the pointwise-observed reference surfaces and their nominal positions. If this method were applied to calibrating 2D LRFs, however, multiple line segments would be obtained instead of line-plane pairs, so the problem would be quite different.
According to the literature surveyed, some recent works have directly calibrated the extrinsic parameters of multiple LRFs based on geometric constraints; because the constraints they used can be found in everyday environments, no man-made scanning target was needed. References [33,34] used a coplanarity constraint and an orthogonality constraint formed by a pair of perpendicular planes for the extrinsic calibration of LRFs. This geometric pattern can generally be found in daily life, such as the corner portion of a vertical wall outside a building. The author of Reference [35] used two kinds of coplanarity constraints formed by a large plane to calibrate the extrinsic parameters of LRFs. That method requires a surface that is both large enough and flat enough, such as a playground floor, and it requires at least three LRFs. According to our practical experience with the methods of References [33,34], it is difficult in a modern living environment to find a corner that is large enough, free of obstacles, and not made of reflective or light-transmissive glass. This paper proposes a method using the same geometric constraints but choosing a common cuboid-shaped corridor as the reference. That is, the method needs neither extra sensors nor artificial landmarks; it utilizes a coplanarity constraint and an orthogonality constraint formed by a corridor, and the rig of LRFs is rotated into different orientations to perform the extrinsic calibration.
The rest of the paper is organized as follows: Section 2 introduces the basic calibration principle of this method; Section 3 describes the detail of the calibration methodology; Section 4 shows the simulation experiments and the experiments based on real data; and Section 5 and Section 6 are the discussion and the conclusion of this work.

2. Calibration Principle

This section explains the objective of an extrinsic calibration, the reason why a cuboid-shaped corridor can be used to do the calibration, and the calibration procedures.

2.1. Objective

An indoor mapping device was taken as an example, as shown in Figure 1. It is a laser-based backpack and trolley device. Three 2D LRFs and one inertial measurement unit (IMU) were installed on the device, and their coordinate frames are shown in Figure 1b. For the device, the extrinsic parameters of all four sensors needed to be calibrated, but only the calibration of the three LRFs is discussed here; it was also planned as the prerequisite for the IMU extrinsic calibration. Thus, the objective of the extrinsic calibration was to obtain the relative rotation and translation parameters between the three LRFs.
The three LRFs are denoted as LRF1, LRF2, and LRF3 from top to bottom, and their coordinate frames are denoted as S1, S2, and S3. Let [R1|T1], [R2|T2], and [R3|T3] be the LRF poses with respect to a common reference frame. For the hardware system, the frame of the IMU may be used as the device frame and also as the reference frame, but for the convenience of calibration, S1 is taken as the reference frame. The final calibration results are therefore [R2|T2] and [R3|T3], each consisting of three Euler angles and three translation values.

2.2. Geometric Constraints

The scanning plane of each LRF is a 2D plane in the 3D real world. Ideally, when it is used to scan a cuboid-shaped corridor, a parallelogram is obtained, as shown in Figure 2a (note the 90° dead zone of the LRF; also, by the right-hand rule, the z axis actually points behind the 2D figure).
When the cuboid-shaped corridor is scanned by multiple LRFs, multiple parallelograms are obtained, and all the sides of the parallelograms lie on the surfaces of the corridor. As shown in Figure 2b, $C_i^a$ and $\mathbf{l}_i^a$ are the center points and the direction vectors of the lines scanned by LRFi (i = 1, 2, 3; a = 1, 2, 3, 4) on surface a. After converting these scanned 2D lines into 3D space, there are two types of geometric constraints: the coplanarity constraint and the orthogonality constraint.
The coplanarity constraint means that the lines scanned by LRFs, which lie on the same surface, should be on the same 3D plane. That is, the distance between the two coplanar lines should be zero. Thus, the coplanarity constraint is expressed as follows:
$$(R_1 \mathbf{l}_1^a \times R_2 \mathbf{l}_2^a) \cdot (R_1 C_1^a + T_1 - R_2 C_2^a - T_2) = 0, \tag{1}$$
where a denotes the surface the lines are on. The cross product in the first factor is perpendicular to both line directions, so the left-hand side is proportional to the distance between the two lines and vanishes when they are coplanar.
The orthogonality constraint means that two adjacent corridor surfaces should be perpendicular to each other, whereas the normal vector of the surface can be expressed by the cross product of the line vectors. Thus, the orthogonality constraint is expressed as follows:
$$\mathbf{n}_a \cdot \mathbf{n}_b = (R_1 \mathbf{l}_1^a \times R_2 \mathbf{l}_2^a) \cdot (R_1 \mathbf{l}_1^b \times R_2 \mathbf{l}_2^b) = 0, \tag{2}$$
where a and b denote two adjacent surfaces of the corridor. In general, the cross product of the direction vectors of two lines can be taken as the normal vector of the plane they lie on, and the perpendicularity of the two planes means that the dot product of their normal vectors is zero.
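To make the two constraints concrete, they can be written as residual functions that vanish when the constraints hold. The following NumPy sketch (function names are ours, not from the paper) evaluates the coplanarity and orthogonality residuals for lines given by unit direction vectors l and center points c:

```python
import numpy as np

def coplanarity_residual(R1, T1, l1, c1, R2, T2, l2, c2):
    """Distance-style residual for two lines that should lie on the same corridor
    surface; l* are 3D unit direction vectors, c* the line center points (2D scans
    lifted to 3D with z = 0). Zero when the lines are coplanar."""
    n = np.cross(R1 @ l1, R2 @ l2)           # perpendicular to both directions
    d = (R1 @ c1 + T1) - (R2 @ c2 + T2)      # vector between the line centers
    return float(n @ d)

def orthogonality_residual(R1, l1a, l1b, R2, l2a, l2b):
    """Residual for two adjacent corridor surfaces a and b: dot product of their
    normals, each taken as the cross product of the two line directions on it."""
    na = np.cross(R1 @ l1a, R2 @ l2a)
    nb = np.cross(R1 @ l1b, R2 @ l2b)
    return float(na @ nb)
```

Both residuals only involve the relative poses of the LRFs, which is why the device pose itself never enters the calibration.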
The geometric constraints are the key to the calibration, and initial poses of the LRFs are given first. Because the initial poses deviate from the correct ones, the scanned lines do not satisfy the coplanarity and orthogonality constraints; the calibrated poses are obtained by minimizing this deviation.
However, the extrinsic calibration of an LRF is obviously a nonlinear problem with six DOFs, and a single corridor observation cannot provide constraints on all six. As shown in Figure 2b, even if the calibration makes all the lines satisfy the geometric constraints using just one observation, the translation parameter along the y axis remains undetermined. Therefore, in order to provide sufficient constraints from the corridor observations, the device must gather data while being rotated inside the corridor into different orientations. The specific rotating operation is stated and discussed in Section 4.1 and Section 5.

2.3. Calibration Procedure

The entire calibration solution consists of three key points: (1) line detection; (2) acquiring coplanar line pairs and orthogonal line pairs with respect to the coplanarity constraint and the orthogonality constraint, respectively; (3) solving the calibration problem with these line pairs as input.
After the scanned lines are detected from the raw LRF data, it is still unknown which lines lie on the same corridor surface, or which line pairs lie on two neighboring corridor surfaces, even when the initial poses of the LRFs are given. In fact, obtaining these line pairs amounts to finding the correct corridor observation in each group of data. Hence, a line sorting method, the generation of all corridor observation candidates, and a method for finding correct corridor observations were designed to solve this problem. The whole calibration procedure is shown in Figure 3, and the detailed explanation of these processes is given in Section 3.

3. Method

3.1. Line Detection

Random sample consensus (RANSAC) [36] is used to detect lines in the 2D points scanned by the LRFs. As can be seen in Figure 2a, a discontinuous line (line "A–D") can occur due to the dead zone of the LRF (or for other reasons such as reflection), and it can still be clustered as a single line by RANSAC.
In order to enhance the robustness of line detection using RANSAC, some modifications were made to the traditional algorithm:
(1) A nearest available distance threshold was added. If a person holds and rotates the device equipped with LRFs during the experiment, part of the human body may be scanned at close range; the nearest available distance threshold, set according to the calibration scene, filters out such interference.
(2) A farthest available distance threshold was added. First, common LRF sensors can scan to a distance of tens of meters; if the calibration scene has only a limited extent, or the farther scene is too cluttered, the farthest available distance threshold avoids interference from it. Second, considering the issue of balance (related to the methods in Section 3.4), the detected lines are not allowed to be too long.
(3) A minimum limit on the distance between the two sampled points was added. Assuming that the two sampled points are on the same line, the greater the distance between them, the closer the fitted line is to the actual line. Moreover, when the LRF scans a close-range target, the point density is relatively high due to the fixed angular resolution, and even a short line may have many inliers, so this limit also reduces the interference of near objects with the line detection.
(4) Traditional RANSAC saves a model whenever it finds enough inliers. However, since multiple lines should be detected and inliers are not shared between them, an inner loop with a fixed number of repetitions is added to ensure that lines with more inliers are extracted with higher priority.
The line detection algorithm is shown in Algorithm 1.
Algorithm 1. Line detection based on modified random sample consensus (RANSAC).
Input:
  2D points: P = {p_i} = {(x_i, y_i)}
  Nearest valid distance: NVD
  Farthest valid distance: FVD
  Threshold used to differentiate between inliers and outliers: ε
  Shortest line length: SLL
  Minimum number of inliers: MNI
  Maximum outer loop times: MOLT
  Fixed inner loop times: FILT
Output:
  The number of lines: nL
  All the lines: Ls
  The inliers of each line: Inliers
Procedure:
  (1) SC ← remove from P the points that are too near (using NVD) or too far (using FVD);
  (2) Repeat sampling two points p1 and p2 from SC, at most FILT times, until the distance between p1 and p2 is greater than SLL;
  (3) Generate a 2D line model {A, B, C} from p1 and p2 (Ax + By + C = 0, A² + B² = 1);
  (4) Count the points of SC that satisfy |Ax + By + C| < ε (inliers);
  (5) Repeat (2)-(4) FILT times, and keep the line L with the largest number of inliers;
  (6) If the number of inliers of L is greater than MNI, refine L with the least-squares method and recompute its inliers; otherwise, return all the lines found so far and their number;
  (7) Save L to Ls and save its inliers to Inliers;
  (8) SC ← remove the inliers of L from SC;
  (9) Repeat (2)-(8) until the number of points in SC is less than MNI or the number of repetitions exceeds MOLT;
  (10) Return the number of lines nL, all the lines Ls, and Inliers.
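As a concrete illustration, the modified RANSAC above can be sketched in Python as follows. The default parameter values are illustrative, not the paper's, and the least-squares refinement is done here via PCA (SVD) of the inliers:

```python
import numpy as np

def detect_lines(points, nvd=0.1, fvd=15.0, eps=0.02, sll=0.3,
                 mni=20, molt=50, filt=200, rng=None):
    """Sketch of the modified RANSAC line detection. points: (N, 2) array.
    Parameter names follow Algorithm 1: NVD/FVD filter by range, eps is the
    inlier distance, SLL the minimum sample separation, MNI the minimum inlier
    count, MOLT/FILT the outer/inner loop limits. Defaults are assumptions."""
    rng = rng or np.random.default_rng(0)
    r = np.linalg.norm(points, axis=1)
    sc = points[(r > nvd) & (r < fvd)]                      # step (1)
    lines, inliers_all = [], []
    for _ in range(molt):                                   # outer loop, step (9)
        if len(sc) < mni:
            break
        best, best_mask = None, None
        for _ in range(filt):                               # inner loop, step (5)
            i, j = rng.choice(len(sc), 2, replace=False)
            p1, p2 = sc[i], sc[j]
            if np.linalg.norm(p2 - p1) < sll:               # step (2)
                continue
            d = p2 - p1
            n = np.array([-d[1], d[0]]) / np.linalg.norm(d) # unit normal, step (3)
            c = -n @ p1                                     # A x + B y + C = 0
            mask = np.abs(sc @ n + c) < eps                 # step (4)
            if best is None or mask.sum() > best_mask.sum():
                best, best_mask = (n[0], n[1], c), mask
        if best is None or best_mask.sum() < mni:           # step (6)
            break
        pts = sc[best_mask]                                 # least-squares refinement
        ctr = pts.mean(axis=0)
        _, _, vt = np.linalg.svd(pts - ctr)
        n = vt[-1]                                          # direction of least variance
        c = -n @ ctr
        mask = np.abs(sc @ n + c) < eps
        lines.append((n[0], n[1], c))                       # step (7)
        inliers_all.append(sc[mask])
        sc = sc[~mask]                                      # step (8)
    return lines, inliers_all
```

For example, feeding points sampled from two perpendicular wall segments should yield two detected lines with the corresponding inlier sets.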

3.2. Line Sorting

The lines detected by the modified RANSAC method are unordered with respect to the real physical corridor surfaces. The purpose of line sorting is to sort these lines according to the physical order of the four corridor surfaces and to reduce the complexity of finding the correct corridor observation (details are given in Section 3.3 and Section 3.4). This means each line is assigned a line index. Suppose there are n (n ≤ 4) straight lines; one of them is chosen as the first line, and the other lines are assigned an index between 2 and 4 according to the physical order.
The line sorting method is applied as follows:
(1) compute the center point of each line;
(2) sort the lines by the angle between the vector from the zero point to the center point and the positive x axis direction (as shown in Figure 4);
(3) set the line index of the first line to "1st", infer whether the line with the next index is an opposite line, the previous adjacent line, or the next adjacent line, and repeat this until every line has been assigned an index.
In the third step, the relationship between two lines is inferred from the direction vector of each line and the vector from the zero point to its center point. Assume $v_1$ and $v_2$ are the direction vectors of lines $l_1$ and $l_2$, whose line indexes are $i_1$ and $i_2$. If $|v_1 \times v_2| \approx 0$, the two lines are on two opposite surfaces, which means $|i_1 - i_2| = 2$; otherwise, they are on two adjacent surfaces. Then, if $l_1$ and $l_2$ are on two adjacent surfaces, assume $u_1$ and $u_2$ are the vectors from the zero point to the center points of $l_1$ and $l_2$. If $(u_1 \times u_2) \cdot Z > 0$ ($Z$ being the unit vector of the positive z axis), then $i_2 = \mathrm{mod}(i_1, 4) + 1$; otherwise, $i_1 = \mathrm{mod}(i_2, 4) + 1$.
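The sorting logic above can be sketched as follows; the sketch uses 0-based surface indexes 0-3 rather than the paper's 1-4, and the (center, direction) data layout is our own assumption:

```python
import numpy as np

def cross2(a, b):
    """Scalar z-component of the 2D cross product."""
    return a[0] * b[1] - a[1] * b[0]

def sort_lines(lines):
    """Sketch of the line-sorting step. Each line is (center, direction) as 2D
    numpy vectors; returns a dict mapping surface index 0..3 -> line."""
    # steps (1)-(2): order the lines by the polar angle of their center point
    lines = sorted(lines, key=lambda l: np.arctan2(l[0][1], l[0][0]))
    result = {0: lines[0]}               # step (3): the first line gets index 0
    prev_idx = 0
    for prev, cur in zip(lines, lines[1:]):
        if abs(cross2(prev[1], cur[1])) < 1e-9:  # parallel -> opposite surfaces
            idx = (prev_idx + 2) % 4
        elif cross2(prev[0], cur[0]) > 0:        # (u1 x u2) . z > 0 -> next surface
            idx = (prev_idx + 1) % 4
        else:                                    # otherwise -> previous surface
            idx = (prev_idx - 1) % 4
        result[idx] = cur
        prev_idx = idx
    return result
```

Applied to the four wall lines of a square corridor seen from its center, the function assigns each line a distinct surface index in physical order.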

3.3. Generating All Corridor Observation Candidates in Each Group of Data

After the line sorting process, a sorted line set is obtained in each group of data:
$$SL = \{SSL_i\} \quad (i = 1, 2, \ldots, n_{LRFs}), \tag{3}$$
where $n_{LRFs}$ is the number of LRFs and $SSL_i$ is the sub sorted line set coming from the sorted lines of LRFi:
$$SSL_i = \{L_i^j\} \quad (i = 1, 2, \ldots, n_{LRFs}), \tag{4}$$
where $L_i^j$ denotes the sorted line with index j among the lines from LRFi. Each LRF can get up to four lines in each group of data, based on the fact that each LRF can get at most one line on each surface of the corridor. Thus, the number of elements in $SSL_i$ is between 0 and 4. It should be noted that the indexes of the lines may not be continuous. For example, if only two lines are detected from LRF2 and they lie on opposite corridor surfaces, then $SSL_2 = \{L_2^1, L_2^3\}$, according to Section 3.2.
It is still unknown after the above processing which line pairs lie on the same surface, or which line pairs lie on two neighboring corridor surfaces. The corridor observation is used to describe this problem.
A corridor observation can be taken as a line container. Specifically, because the corridor has four surfaces, one corridor observation consists of four line sets,
$$CO = \{S_1, S_2, S_3, S_4\}, \tag{5}$$
where $S_a$ (a = 1, 2, 3, 4) denotes the line set of corridor surface a:
$$S_a = \{L_i^a\} \quad (a = 1, 2, 3, 4;\ i = 1, 2, \ldots, n_{LRFs}). \tag{6}$$
In order to get $CO$, all the lines in $SL$ need to be arranged into the four line containers of the $CO$. Comparing Equation (4) with Equation (6), the essence of the problem is how to convert $L_i^j$ into $L_i^a$, in other words, how to convert the line index into the corridor surface index while the intervals between indexes stay the same. In this way, the lines with the same surface index are coplanar line pairs, and the lines with adjacent surface indexes lie on two surfaces that are perpendicular to each other.
Suppose there are three lines from LRF1, two lines from LRF2, and four lines from LRF3. If the line sequence within each LRF is not considered, the number of corridor observation candidates is $A_4^3 \times A_4^2 \times A_4^4 = 24 \times 12 \times 24 = 6912$ (an arrangement problem, since each LRF can get at most one line on each surface of the corridor). However, when the line sequence is considered, and assuming the two lines from LRF2 lie on two opposite corridor surfaces, generating all corridor observation candidates can be described as putting the lines into the four surfaces (line containers), as shown in Figure 5.
To get all the corridor observation candidates, the surface indexes of the lines of each LRF should be shifted together. If an LRF has no adjacent lines, such as LRF2 here, its line sequence is fixed; otherwise, the line sequence is doubled (for LRF3, the opposite sequence is L4, L3, L2, L1). Regardless of the doubled sequences, however, the direction of the shift must be fixed. Thus, there are $(1 \times 4) \times (2 \times 4) = 32$ corridor observation candidates (where "1" and "2" stand for a fixed sequence and a doubled sequence, respectively). If LRF2 had adjacent lines, the number would double.
If no lines are detected in the point cloud of one of the LRFs, the observation is abandoned. Otherwise, the total number of possible observations is given by Equation (7):
$$\tau = \left(4 \times 2^{\delta_2}\right) \times \left(4 \times 2^{\delta_3}\right), \tag{7}$$
where $\delta_i \in \{0, 1\}$ (i = 2, 3) indicates whether the lines from LRFi need to be moved in a doubled sequence: if only one line, or two opposite lines, are detected by LRFi, then $\delta_i = 0$; otherwise, $\delta_i = 1$.
In addition, although the calibration works with pairs of LRFs, considering the lines from all three LRFs in the generation makes the assessment of the corridor observation more stable.
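The candidate generation can be sketched as follows, fixing the first LRF's assignment and cyclically shifting (and, when adjacent lines exist, also reversing) the assignments of the others; the per-LRF dict layout is our assumption:

```python
from itertools import product

def candidates(per_lrf):
    """Enumerate corridor-observation candidates.
    per_lrf: list of dicts, one per LRF, mapping sorted line index 0..3 -> line.
    The first LRF is kept fixed; each other LRF is cyclically shifted (4 ways)
    and, when it contains adjacent lines, also reversed (doubled sequence)."""
    def variants(d, fixed):
        if fixed:
            return [d]
        keys = sorted(d)
        has_adjacent = any((k + 1) % 4 in d for k in keys)
        seqs = [keys, keys[::-1]] if has_adjacent else [keys]  # doubled sequence
        out = []
        for seq in seqs:
            for shift in range(4):                             # fixed shift direction
                out.append({(k + shift) % 4: d[orig]
                            for k, orig in zip(keys, seq)})
        return out
    all_variants = [variants(d, i == 0) for i, d in enumerate(per_lrf)]
    return list(product(*all_variants))
```

For the worked example in the text (three lines from LRF1, two opposite lines from LRF2, four lines from LRF3), this enumeration yields 32 candidates.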

3.4. Finding the Correct Corridor Observation in Each Group of Data

Among all the generated corridor observation candidates, a corridor observation assessment method, built on a coplanarity assessment method, is used to evaluate each candidate, and the candidate with the lowest error is taken as the correct corridor observation.
In Reference [33], the authors performed the calibration by scanning perpendicular planes, generating the two possible corner observations for each pair of LRFs (each LRF gets two lines in total on the corner), and then used RANSAC to filter out the incorrect corner observations based on the initialized relative poses of the LRFs. This meant that at least half of the original input observations were entirely incorrect, and the inliers differed between calibration runs even with the same data and parameters, due to the corner observation selection method, which led to unstable calibration results. In the present method, with the line sorting result and the generation of all corridor observation candidates described above, the correct corridor observation, in which all the coplanar line pairs and all the neighboring surface pairs are correct, can be obtained using the corridor observation assessment method, which is also based on the coplanarity assessment method.

3.4.1. Coplanarity Assessment Method

The line detection procedure yields the inliers of each line, so it is easy to get the two end points of each line. The volume of the tetrahedron whose vertices are the four end points of two lines is used to assess the coplanarity of the two lines. The two lines are more likely to be a coplanar line pair if their tetrahedron is small, as shown in Figure 6 (the box is unrelated to the method; it is drawn only to make the figure look three-dimensional).
Assume $\mathbf{a}$, $\mathbf{b}$, and $\mathbf{c}$ are the three vectors from one vertex to the other three vertices of the tetrahedron; its volume can then be computed cheaply as
$$V_{\text{tetrahedron}} = \frac{\left| (\mathbf{a} \times \mathbf{b}) \cdot \mathbf{c} \right|}{6}. \tag{8}$$
Considering that the endpoints of the lines are points scanned by the LRFs, they may not lie exactly on the detected lines, which can produce a small error. Thus, each endpoint is replaced by its projection onto the line.
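A minimal sketch of this coplanarity assessment, with each line segment given by its two 3D end points (function names are ours):

```python
import numpy as np

def tetra_volume(p, q, r, s):
    """Volume of the tetrahedron with vertices p, q, r, s:
    V = |(a x b) . c| / 6, with a, b, c the edge vectors from p."""
    a, b, c = q - p, r - p, s - p
    return abs(np.cross(a, b) @ c) / 6.0

def coplanarity_score(seg1, seg2):
    """Coplanarity assessment of two 3D segments, each a (start, end) pair of
    points; a smaller volume means the pair is more likely coplanar."""
    return tetra_volume(seg1[0], seg1[1], seg2[0], seg2[1])
```

Two segments in a common plane score exactly zero; lifting one end point off the plane yields a positive volume.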

3.4.2. Corridor Observation Assessment Method

With all possible corridor observations generated by the method in the previous section, each of them can be assessed by the sum of the volumes of the tetrahedrons over all four corridor surfaces:
$$V_{CO} = \sum_{w=1}^{4} \Phi(l_1, l_2, \ldots, l_{n_w}), \tag{9}$$
where $n_w$ denotes the number of lines on corridor surface w, the function $\Phi$ is the sum of the volumes of the tetrahedrons formed by all line segment pairs among $l_1, \ldots, l_{n_w}$, and $V_{CO}$ is the final assessment score of the corridor observation.
The coplanarity assessment score represents the error formed by the condition that the two lines are taken as a coplanar line pair, and the corridor assessment score represents the error formed by the condition that the line sets in C O are taken as a corridor observation. Thus, the corridor observation with the smallest sum volume of tetrahedrons is taken as the correct observation finally.
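The corridor observation score can be sketched as the sum of pairwise tetrahedron volumes over the four surfaces; the data layout (a list of four per-surface segment lists) is our assumption:

```python
import numpy as np
from itertools import combinations

def tetra_volume(p, q, r, s):
    """Volume of the tetrahedron with vertices p, q, r, s."""
    a, b, c = q - p, r - p, s - p
    return abs(np.cross(a, b) @ c) / 6.0

def observation_score(surfaces):
    """Score one corridor observation candidate: for each of the four surfaces,
    sum the tetrahedron volumes of every pair of segments assigned to it.
    surfaces: list of 4 lists; each segment is a (start, end) pair of 3D points."""
    return sum(tetra_volume(l1[0], l1[1], l2[0], l2[1])
               for surface in surfaces
               for l1, l2 in combinations(surface, 2))
```

The candidate whose assignment places truly coplanar segments together receives the smallest score and is selected.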

3.5. Calibration Using All Correct Corridor Observations

One correct corridor observation can be obtained from the assessment process based on one frame of data from all LRFs. The input of the calibration solution is a frame sequence with different poses, so many corridor observations with correct coplanar line pairs and correct perpendicular corridor surface pairs can be obtained. Finally, the calibration is converted into a nonlinear optimization problem based on the coplanarity constraint and the orthogonality constraint, which is expressed as:
$$\underset{\{R,T\}}{\operatorname{argmin}} \sum_{i=1}^{N} \left( \sum_{a=1}^{4} \omega_i^a \left( (R_j \mathbf{l}_j^a \times R_k \mathbf{l}_k^a) \cdot (R_j C_j^a + T_j - R_k C_k^a - T_k) \right)^2 + \sum_{a=1}^{3} \omega_i^{a,a+1} \left( (R_j \mathbf{l}_j^a \times R_k \mathbf{l}_k^a) \cdot (R_j \mathbf{l}_j^{a+1} \times R_k \mathbf{l}_k^{a+1}) \right)^2 \right), \tag{10}$$
where N is the number of corridor observations, j is the index of the LRF being calibrated, k is the index of the reference LRF, a is the surface index of the corridor, and $\omega_i^x$ (the superscript x stands for a or {a, a+1}) is the weight of the corresponding residual from $CO_i$, computed through linearization from a first-order Taylor approximation of the error functions.
The resulting nonlinear least-squares problem is solved iteratively using Levenberg–Marquardt:
$$\left[ \boldsymbol{\mu}_2^k,\ \Delta T_2^k,\ \ldots,\ \boldsymbol{\mu}_m^k,\ \Delta T_m^k \right]^{T} = -\left( H + \lambda\, \operatorname{diag}(H) \right)^{-1} \mathbf{g}, \tag{11}$$
where m is the number of LRFs, k is the iteration index, $\boldsymbol{\mu}_j^k$ (j = 2, …, m) is the rotation increment represented by the exponential map ($e^{\boldsymbol{\mu}_j} R_j$), $\Delta T_j^k$ is the translation increment, $\lambda$ is the Levenberg–Marquardt damping factor, H is the Hessian matrix (a symmetric matrix of dimension 6(m − 1)), and $\mathbf{g}$ is the gradient (a column vector of dimension 6(m − 1)) of the cost function. For the specific calculation of H, $\mathbf{g}$, and $\omega_i^x$, please refer to Reference [33]. After each solve, the rotation matrix and translation are updated using the exponential map [37] as
$$R_j^{k+1} = e^{\boldsymbol{\mu}_j^k} R_j^k, \quad T_j^{k+1} = \Delta T_j^k + T_j^k, \quad j \in [2, m], \tag{12}$$
from an initial guess for the relative poses of LRFs, which may be obtained from a rough measurement of the rig.
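As an end-to-end illustration of the optimization, the sketch below simulates two LRFs scanning the four walls of a corridor under several device orientations and recovers the second LRF's extrinsics with SciPy's generic least-squares solver instead of the paper's hand-written Levenberg–Marquardt; all poses, wall dimensions, the 'ZXY' Euler convention, and the unweighted residuals are assumptions of this sketch:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

# Corridor walls n.x = d in the world frame (2 m x 2 m cross-section, axis along z);
# consecutive entries are adjacent (perpendicular) surfaces.
WALLS = [(np.array([1.0, 0.0, 0.0]), 0.0), (np.array([0.0, 1.0, 0.0]), 0.0),
         (np.array([1.0, 0.0, 0.0]), 2.0), (np.array([0.0, 1.0, 0.0]), 2.0)]

def scan_line(W_R, W_t, n, d):
    """Intersect an LRF scanning plane (local z = 0, world pose W_R, W_t) with the
    wall n.x = d; return (center, direction) of the line in LRF-local coordinates."""
    a, b = n @ W_R[:, 0], n @ W_R[:, 1]          # the local line a*u + b*v = c
    c = d - n @ W_t
    if abs(b) > abs(a):
        pts = [np.array([u, (c - a * u) / b, 0.0]) for u in (-1.0, 1.0)]
    else:
        pts = [np.array([(c - b * v) / a, v, 0.0]) for v in (-1.0, 1.0)]
    direction = pts[1] - pts[0]
    return 0.5 * (pts[0] + pts[1]), direction / np.linalg.norm(direction)

def residuals(x, observations):
    """Unweighted coplanarity and orthogonality residuals (LRF1 = reference)."""
    R2 = Rotation.from_euler('ZXY', x[:3]).as_matrix()
    T2 = x[3:]
    res = []
    for obs in observations:                      # obs: per-wall ((c1, l1), (c2, l2))
        normals = []
        for (c1, l1), (c2, l2) in obs:
            n = np.cross(l1, R2 @ l2)
            res.append(n @ (c1 - R2 @ c2 - T2))   # coplanarity residual
            normals.append(n)
        for na, nb in zip(normals, normals[1:]):
            res.append(na @ nb)                   # orthogonality residual
    return res

# Synthetic data; the "true" extrinsics below are this sketch's assumptions.
R2_true = Rotation.from_euler('ZXY', [-80, 0, -35], degrees=True).as_matrix()
T2_true = np.array([-0.15, 0.15, -0.20])
device_angles = [(0.4, 0.5, 0.3), (-0.7, 0.8, -0.2), (1.2, -0.6, 0.9),
                 (-1.0, -0.9, 0.5), (0.2, 1.1, -1.0), (0.9, 0.3, 1.2)]
observations = []
for ang in device_angles:                         # rotate the device, as in Sec. 4.1
    Rd = Rotation.from_euler('ZXY', ang).as_matrix()
    td = np.array([1.0, 1.0, 0.0])                # device stays inside the corridor
    observations.append([(scan_line(Rd, td, n, d),
                          scan_line(Rd @ R2_true, Rd @ T2_true + td, n, d))
                         for n, d in WALLS])

# Start from a rough initial guess and solve the nonlinear least-squares problem.
x0 = np.concatenate([np.deg2rad([-90.0, 0.0, -30.0]), [-0.10, 0.10, -0.25]])
sol = least_squares(residuals, x0, args=(observations,))
```

Because the residuals involve only the relative extrinsics, the (simulated) device poses never appear in the cost; varying the device orientation is what makes all six DOFs observable, mirroring the rotation requirement discussed in Section 2.2.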

4. Experiments and Analysis

4.1. Simulation

Simulation experiments were based on the LRF scanning geometry model and the corridor geometry model. With the initial poses of the LRFs and the rotational motion trajectory time series of the device as the input of the simulation, the calibration result could be obtained. The true LRF poses were set in the scanning geometry, so they could be taken as ground truth when evaluating the calibration accuracy.
The scanning model of the LRFs is based on the HOKUYO UTM-30LX 2D LRF, whose characteristics are shown in Table 1. In order to make the simulated observation data close to actual data, Gaussian noise with σ = 0.03 m [38] was added to the simulated data.
The rotation operation is important for the calibration because it directly determines whether the collected data provides sufficient geometric constraints. Considering that some devices may be heavy and inconvenient to rotate in actual use, Table 2 lists several sets of rotations that can be simulated and applied in almost all situations. The device pose in operation F (random pose) was random at each moment in the simulation system, but it can stand for a random rotation operation in reality. In addition, the rotation sequence of the Euler angles was "z-x-y" (yaw-pitch-roll), and the start pose of the device was the same as the world reference pose in Figure 2b.
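The "z-x-y" (yaw-pitch-roll) sequence can be reproduced with SciPy when checking simulated poses; treating it as an intrinsic sequence (SciPy's uppercase 'ZXY'), and the helper name below, are assumptions of this sketch:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def simulated_pose(yaw_pitch_roll_deg, translation_mm):
    """Build an [R|T] pose from "z-x-y" (yaw-pitch-roll) Euler angles in degrees
    and a translation in millimeters; the intrinsic 'ZXY' order is an assumption."""
    R = Rotation.from_euler('ZXY', yaw_pitch_roll_deg, degrees=True).as_matrix()
    return R, np.asarray(translation_mm, dtype=float)

# e.g., the simulated true pose of LRF2: [-80°, 0°, -35°], [-150 mm, 150 mm, -200 mm]
R2, T2 = simulated_pose([-80.0, 0.0, -35.0], [-150.0, 150.0, -200.0])
```

The resulting matrix is a proper rotation, which is a quick sanity check on any Euler-angle convention chosen for the simulation.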
The geometric model of the corridor was set to a simple rectangular tubular structure with an infinite length. First, the width and height of the corridor were set to 2 m. The simulated true poses of LRF2 and LRF3 were set to [−80°, 0°, −35°], [−150 mm, 150 mm, −200 mm], and [80°, 0°, −150°], [150 mm, 150 mm, −500 mm], respectively, whereas their simulated initial poses were set to [−90°, 0°, −30°], [−100 mm, 100 mm, −250 mm], and [90°, 0°, −145°], [100 mm, 100 mm, −550 mm], respectively. Each operation gathered 360 frames of data, which was taken as the input of the calibration process. After 10 repeated calibrations (the simulated data was regenerated at each repeat), the error distribution of the calibrated three Euler angles and three translation values of LRF2 based on operations B, C, D, E, and F were calculated and are shown in Figure 7, but not the results based on operation A.
Since operation A rotated the device only in the yaw angle, without any pitch or roll, each frame of LRF1 could only scan the walls on the left and right sides, yielding only two parallel straight lines, which could not provide enough constraints. After the 10 repeated calibrations, the mean values of the calibrated Euler angles and three translation values of LRF2 were [−78.56°, 4.43°, −34.89°] and [−159.16 mm, 161.94 mm, −184.25 mm], which deviated from the set true roll angle by more than 4° and from the set true z translation by more than 15 mm.
At the same time, the calibration results based on operations B, C, D, E, and F were very close to the true values. Although the accuracies based on these five operations appear slightly different in Figure 7, in practice the ease of operation matters more. For example, operations C and D are rather complicated, so they are not recommended in general; however, if the device is too large or inconvenient to rotate 360°, operation E can be considered. Of course, the actual operation is not limited to these, as long as it provides sufficient constraints.
The error distributions of 10 repeated simulation experiments with different numbers of observations are shown in Figure 8. Based on operation B, frame sample intervals of 1, 5, 10, and 40 were set to obtain 360, 72, 36, and 9 observations, respectively, from the originally generated 360 frames of simulated data. As can be seen from the figure, in general, the greater the number of observations, the higher the accuracy of the calibration results.
In order to verify that the method is applicable to corridors of different dimensions, corridors with width-to-height ratios of 1:1, 2:1, and 1:2 were each set up in the simulation. The simulated corridor widths and heights corresponding to these ratios were [2 m, 2 m], [4 m, 2 m], and [2 m, 4 m]. Similarly, the error distribution after 10 repetitions based on operation B and a 5-frame sample interval is shown in Figure 9. The different aspect ratios of the corridor had little effect on the calibration.

4.2. Real Data

The 3D indoor mapping device shown in Figure 1 was used as the calibration device. It was placed on a trolley so that the rotation movements could be performed easily. Figure 10 shows the experiment environment, which is easy to find in many buildings, such as office buildings. The device employed three HOKUYO UTM-30LX units (the characteristics of the LRF are shown in Table 1), and the extrinsic calibrations of LRF1 and LRF2, and of LRF1 and LRF3, were performed automatically after the data were gathered.
Figure 11 shows an example of a line detection result and the corresponding correct corridor observation selection. Note the colors of the lines: the method in Section 3.2 was used to sort the lines detected by each LRF, and they are displayed in red, green, blue, and magenta in order. It is apparent from the bottom-right panel of Figure 11 that the line segments scanned by the three LRFs could not be perfectly overlapped into a rectangle based on the initial extrinsic parameters.
Figure 12 contrasts an incorrect corridor observation with a correct one for the same group of data and the same initial LRF poses. When the observation was incorrect, meaning that the “coplanar lines” were not in fact on the same surfaces, the sum of the volumes of all the tetrahedrons was much larger than for the correct observation. Hence, with this assessment method, the correctness of the inputted coplanar line pairs and neighboring surface pairs could be ensured as long as the initial poses were not far from the true poses.
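The volume test of Figures 6 and 12 reduces to a scalar triple product over the four endpoints of a line pair. A minimal sketch with our own illustrative names (endpoints as 3-tuples in a common frame):

```python
def tetra_volume(p0, p1, p2, p3):
    """Volume of the tetrahedron spanned by four 3D points: |det[p1-p0, p2-p0, p3-p0]| / 6."""
    a = [p1[i] - p0[i] for i in range(3)]
    b = [p2[i] - p0[i] for i in range(3)]
    c = [p3[i] - p0[i] for i in range(3)]
    det = (a[0] * (b[1] * c[2] - b[2] * c[1])
           - a[1] * (b[0] * c[2] - b[2] * c[0])
           + a[2] * (b[0] * c[1] - b[1] * c[0]))
    return abs(det) / 6.0

def coplanarity_score(lines):
    """Sum of tetrahedron volumes over all endpoint pairs of supposedly coplanar
    line segments; close to zero for a correct corridor observation."""
    total = 0.0
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            (a0, a1), (b0, b1) = lines[i], lines[j]
            total += tetra_volume(a0, a1, b0, b1)
    return total
```

As in Figure 6, the endpoints (0,0,0), (1,0,0), (0,1,0), (0,0,1) give a volume of 1/6, while two segments lying in one plane give 0.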
The initial poses of LRF2 and LRF3 were set to [−90°, 0°, −30°], [100 mm, 50 mm, −300 mm] and [90°, 0°, −140°], [−100 mm, 50 mm, −600 mm], respectively. Figure 13 shows the update trajectories of the three Euler angles and the three translations of LRF2 during calibration. The magnitude of the parameter update gradually decreased, and the parameters gradually approached the target solution as the iterations progressed.
The purpose of the extrinsic calibration was to fuse all the point clouds from multiple LRFs into a global reference frame. Because the corridor was cuboid-shaped, an accurate calibration result implies that projecting the fused point cloud onto a plane perpendicular to the corridor yields a rectangle formed by many overlapping 3D line segments. Figure 14 shows the comparison before and after calibration. After the calibration, the point clouds from the three LRFs merged well into a cuboid-shaped corridor, so the accuracy of the calibration result could be visually verified.
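This visual check can be expressed as a simple projection: transform every point into the world frame with the calibrated extrinsics, then drop the coordinate along the corridor axis. A sketch, assuming the corridor runs along z and each pose is given as a 3×3 rotation matrix and a 3-vector translation (names are illustrative):

```python
def fuse_and_project(point_clouds, poses):
    """Transform each LRF's points into the world frame with its (R, t) extrinsics,
    then project onto the x-y plane (perpendicular to a corridor running along z).
    For an accurate calibration, the projected points outline a rectangle."""
    projected = []
    for points, (R, t) in zip(point_clouds, poses):
        for p in points:
            w = [sum(R[i][k] * p[k] for k in range(3)) + t[i] for i in range(3)]
            projected.append((w[0], w[1]))  # drop z: the corridor cross-section
    return projected
```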
To verify the repeatability of the method, more experimental results are shown in Table 3 and Table 4. Table 3 gives the calibration results based on the device in Figure 1 and experiment scene A; three sets of data were gathered in total, and each set was repeatedly calibrated 10 times to obtain the mean and standard deviation of the results. Considering that some devices are large, like the device in Reference [39], and may be inconvenient to rotate in small scenes such as scene A, the configuration of the three LRFs was changed and the device was then recalibrated in scene B (Figure 15). The width and height of the corridor in scene A were 1.7 m and 2.4 m, respectively; because that corridor was long enough, the three datasets in Table 3 were gathered based on operations B, C, and F. The length of the corridor in scene B was only 6.4 m, and its width and height were 2.2 m and 2.3 m, respectively; thus, the three datasets in Table 4 were gathered based on operation E.
The initial poses of the LRFs in Table 3 were the same as above, and the initial poses of LRF2 and LRF3 in Table 4 were set to [90°, 0°, −155°], [−100 mm, 150 mm, −150 mm] and [90°, 0°, 155°], [100 mm, 150 mm, −1200 mm], respectively. As can be seen from Table 3 and Table 4, the deviation of the calibration results stayed substantially within 1° and 10 mm. In addition, the deviation of the pitch angle was larger than the deviations of the roll and yaw angles, and the deviation of the translation in the y direction was larger than the deviations of the translations in the x and z directions. This phenomenon was expected for Table 3 because it matched the error distributions based on operations B, C, and F (simulation results in Figure 7). For Table 4, the reason was that it was difficult to apply a relatively large pitch angle to the device in actual operation, so the constraint in the y direction was weak, and the constraint on the pitch angle was also weak; the error distribution was thus more like the one based on operation B.
To further prove the stability of the method, two batches of calibration experiments were implemented based on the first dataset in Table 3. First, almost all possible hand-measurement errors in the initial poses of the LRFs were taken into consideration. Assuming that each initial Euler angle had three possible deviations of −10°, 0°, and +10°, and that each initial translation had three possible deviations of −100 mm, 0 mm, and +100 mm, there were 3^6 = 729 combinations of deviations. Taking the calibration of LRF2 as an example, the reference accurate pose could be obtained from Table 3, and the result distribution of the 729 calibrations is shown in Figure 16. There were only 19 outliers among all the results, with the rest concentrated in a small range. In addition, from a practical point of view, errors of 10° and 100 mm can be avoided by careful manual measurement.
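Enumerating the 3^6 = 729 deviation combinations is straightforward; a sketch with illustrative names, using the deviation steps stated above and the LRF2 initial pose quoted earlier in the text:

```python
import itertools

angle_devs = (-10.0, 0.0, 10.0)    # degrees, per Euler angle
trans_devs = (-100.0, 0.0, 100.0)  # millimeters, per translation axis

# every combination of deviations over the six extrinsic parameters
combos = list(itertools.product(angle_devs, angle_devs, angle_devs,
                                trans_devs, trans_devs, trans_devs))
assert len(combos) == 3 ** 6  # 729 initial-pose hypotheses

def perturbed_initial_pose(euler, trans, combo):
    """Apply one deviation combination to a nominal initial pose."""
    d_yaw, d_pitch, d_roll, dx, dy, dz = combo
    return ([euler[0] + d_yaw, euler[1] + d_pitch, euler[2] + d_roll],
            [trans[0] + dx, trans[1] + dy, trans[2] + dz])
```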
Furthermore, another 729 calibrations were implemented without deviations, which means all the initial extrinsic parameters came from the centered results in Figure 16. As shown in Figure 17, the calibration results were all stably distributed around an accurate result. Hence, the calibration method was stable under multiple tests.

5. Discussion

The proposed extrinsic calibration method for 2D LRFs uses a cuboid-shaped corridor as the experiment environment, which is very common in indoor buildings. The method in Reference [33] is less suitable for indoor devices because corners that meet its requirements are scarce in most cities, and especially in indoor environments. Whether a corner or a corridor is used, it should be of sufficient length and have no reflective or transparent material on its surfaces. The proposed method is therefore better suited to indoor conditions.
Although the method of solving the nonlinear optimization problem and the geometric constraints used are the same as in Reference [33], the calibration scene and the processing pipeline from data acquisition to the finding of corridor observations are different. First, as for the calibration scenes, the authors of Reference [35] proposed two extrinsic calibration solutions, one based on scanning a flat plane and the other based on scanning perpendicular planes. The former needs at least three LRFs, and the latter needs two or three perpendicular planes. The method in this paper is based on scanning a cuboid-shaped corridor, which has four planes. Second, the line detection in this method was modified from traditional RANSAC, and the coplanarity assessment method and the corridor assessment method were proposed to make sure the observations inputted to the nonlinear solver are correct, resulting in a stable calibration result. These methods can also be extended to other application scenarios.
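For reference, the core of a plain RANSAC line detector [36] over one LRF frame could look like the sketch below; the modified variant described in Section 3.2, with line sorting and coplanarity assessment, adds checks that are omitted here. Function names and thresholds are illustrative, not the authors' implementation.

```python
import math
import random

def ransac_line(points, iters=200, tol=0.02, min_inliers=10, rng=random):
    """Minimal 2D RANSAC line fit: repeatedly pick two points, count the points
    within `tol` (meters) of the implied line, and keep the best model.
    Returns (unit-normalized line (a, b, c) with a*x + b*y + c = 0, inliers)."""
    best = (None, [])
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        a, b = y2 - y1, x1 - x2
        norm = math.hypot(a, b)
        if norm < 1e-12:
            continue  # degenerate sample: both points coincide
        c = -(a * x1 + b * y1)
        inliers = [p for p in points if abs(a * p[0] + b * p[1] + c) / norm < tol]
        if len(inliers) > len(best[1]):
            best = ((a / norm, b / norm, c / norm), inliers)
    return best if len(best[1]) >= min_inliers else (None, [])
```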
It should be noted that the device cannot be rotated too fast in actual operation. Since a 2D LRF ranges by rotating a laser beam, the laser points on the scanning plane are not all obtained at the same time. Thus, if the device moves or rotates fast, the points gathered in one frame do not lie on the same 3D plane (whereas in the simulation each frame is instantaneous). This is actually one of the sources of calibration error. Conversely, if the device moves or rotates too slowly, the amount of collected data is very large, and frame sampling is then required during calibration, or the processing time becomes long. Therefore, the rotation should be neither too fast nor unnecessarily slow.
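The magnitude of this effect is easy to estimate: with the 25 ms scan period of Table 1, a device rotating at ω degrees per second turns by ω × 0.025 degrees between the first and last point of a single frame. A hypothetical helper (our own, for illustration):

```python
def intra_frame_skew_deg(rotation_rate_deg_per_s, scan_period_s=0.025):
    """Angle the device turns during one LRF scan (25 ms for the UTM-30LX, Table 1);
    the points of that frame are effectively spread over planes spanning this angle."""
    return rotation_rate_deg_per_s * scan_period_s

# e.g. rotating at 30 deg/s, each 25 ms frame spans 0.75 deg of device rotation
skew = intra_frame_skew_deg(30.0)
```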
Future work may focus on reducing the method’s requirements on the accuracy of the initial poses. In general, manual measurement errors of the three translation parameters can be kept within 10 cm because the parameters are independent of each other. However, the three Euler angles are not independent of each other, and different Euler angle combinations can even generate the same orientation matrix; therefore, if the pose angles are measured manually in an inappropriate way, the initial angles may deviate by more than 10°.
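This non-uniqueness can be verified numerically: for the “z-x-y” composition used here, the triples (yaw, pitch, roll) and (yaw + 180°, 180° − pitch, roll + 180°) produce exactly the same rotation matrix. A self-contained check (illustrative code, not the authors'):

```python
import math

def rot_zxy(yaw, pitch, roll):
    # compose in "z-x-y" (yaw-pitch-roll) order, angles in degrees
    y, p, r = (math.radians(a) for a in (yaw, pitch, roll))
    Rz = [[math.cos(y), -math.sin(y), 0], [math.sin(y), math.cos(y), 0], [0, 0, 1]]
    Rx = [[1, 0, 0], [0, math.cos(p), -math.sin(p)], [0, math.sin(p), math.cos(p)]]
    Ry = [[math.cos(r), 0, math.sin(r)], [0, 1, 0], [-math.sin(r), 0, math.cos(r)]]
    mul = lambda A, B: [[sum(A[i][k] * B[k][j] for k in range(3))
                         for j in range(3)] for i in range(3)]
    return mul(mul(Rz, Rx), Ry)

R1 = rot_zxy(-80.0, 10.0, -35.0)
R2 = rot_zxy(-80.0 + 180.0, 180.0 - 10.0, -35.0 + 180.0)
same = all(abs(R1[i][j] - R2[i][j]) < 1e-12 for i in range(3) for j in range(3))
# same is True: two different Euler triples, one and the same orientation
```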
Moreover, devices equipped with multiple LRFs are often also equipped with an IMU, like the one shown in Figure 1, and the extrinsic calibration of the IMU is often necessary. After the extrinsic calibration of the LRFs, the relative pose relationships among them are determined, so they can be regarded as one rigid sensor, i.e., used together as a 3D laser scanner. Thus, calibrating the extrinsic parameters between the IMU and the set of LRFs is easier than calibrating the extrinsic parameters between the IMU and a single LRF.

6. Conclusions

An extrinsic calibration method for multiple 2D LRFs using an existing cuboid-shaped corridor as the reference has been proposed. It needs neither special artificial targets in the environment nor supervised data association. Its only requirement is rotating the LRFs to scan a common indoor cuboid-shaped corridor.
The coplanarity constraint and the orthogonality constraint provided by an indoor corridor are used to find the accurate relative poses of the LRFs. In order to provide the nonlinear solver with correct input, a modified RANSAC algorithm, a line sorting method, a coplanarity assessment method, and a corridor assessment method are proposed to improve the stability of finding the correct corridor observations. Among them, the modified RANSAC algorithm and the coplanarity assessment method may also be useful for other applications.
Future work may focus on improving robustness to initial pose deviations, for example by adding other useful geometric constraints. In addition, building on this work and its calibration results, the extrinsic calibration of the IMU is also a plan for future research.

Author Contributions

D.Y. designed and implemented the calibration solution and wrote this paper; J.L. put forward many suggestions for the experimental scheme and helped to modify the paper; T.W. provided a lot of theoretical and code debugging help; K.L. helped conduct the calibration experiments; J.H. and R.C. helped to proofread and check the paper.

Funding

This study was supported in part by the Natural Science Fund of China with project No. 41874031, the Technology Innovation Program of Hubei Province with Project No. 2018AAA070, and the Natural Science Fund of Hubei Province with Project No. 2018CFA007.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Park, C.-S.; Kim, D.; You, B.-J.; Oh, S.-R. Characterization of the Hokuyo UBG-04LX-F01 2D laser rangefinder. In Proceedings of the 19th International Symposium in Robot and Human Interactive Communication, Viareggio, Italy, 13–15 September 2010; pp. 385–390. [Google Scholar]
  2. Pérez, J.; Caballero, F.; Merino, L. Enhanced Monte Carlo Localization with Visual Place Recognition for Robust Robot Localization. J. Intell. Robot. Syst. 2015, 80, 641–656. [Google Scholar] [CrossRef]
  3. Atman, J.; Popp, M.; Ruppelt, J.; Trommer, G.F. Navigation Aiding by a Hybrid Laser-Camera Motion Estimator for Micro Aerial Vehicles. Sensors 2016, 16, 1516. [Google Scholar] [CrossRef] [PubMed]
  4. Huitl, R.; Schroth, G.; Hilsenbeck, S.; Schweiger, F.; Steinbach, E. TUMindoor: An extensive image and point cloud dataset for visual indoor localization and mapping. In Proceedings of the 2012 19th IEEE International Conference on Image Processing (ICIP), Orlando, FL, USA, 30 September–3 October 2012; pp. 1773–1776. [Google Scholar]
  5. Zhang, J.; Singh, S. Low-drift and real-time lidar odometry and mapping. Auton. Robots 2017, 41, 401–416. [Google Scholar] [CrossRef]
  6. Hess, W.; Kohler, D.; Rapp, H.; Andor, D. Real-time loop closure in 2D LIDAR SLAM. In Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; pp. 1271–1278. [Google Scholar]
  7. Jung, J.; Yoon, S.; Ju, S.; Heo, J.J.S. Development of kinematic 3D laser scanning system for indoor mapping and as-built BIM using constrained SLAM. Sensors 2015, 15, 26430–26456. [Google Scholar] [CrossRef] [PubMed]
  8. Chen, J.D.; Cho, Y.K. Real-time 3D Mobile Mapping for the Built Environment. In Proceedings of the 33rd International Symposium on Automation and Robotics in Construction (ISARC), Auburn, AL, USA, 18–21 July 2016. [Google Scholar]
  9. Miller, I.; Campbell, M.; Huttenlocher, D. Efficient Unbiased Tracking of Multiple Dynamic Obstacles Under Large Viewpoint Changes. IEEE Trans. Robot. 2011, 27, 29–46. [Google Scholar] [CrossRef]
  10. Morales, J.; Martinez, J.L.; Mandow, A.; Reina, A.J.; Pequeno-Boter, A.; Garcia-Cerezo, A. Boresight calibration of construction misalignments for 3D scanners built with a 2D laser range finder rotating on its optical center. Sensors (Basel) 2014, 14, 20025–20040. [Google Scholar] [CrossRef] [PubMed]
  11. Olivka, P.; Krumnikl, M.; Moravec, P.; Seidl, D. Calibration of Short Range 2D Laser Range Finder for 3D SLAM Usage. J. Sens. 2016, 2016, 3715129. [Google Scholar] [CrossRef]
  12. Jung, J.; Kim, J.; Yoon, S.; Kim, S.; Cho, H.; Kim, C.; Heo, J. Bore-Sight Calibration of Multiple Laser Range Finders for Kinematic 3D Laser Scanning Systems. Sensors (Basel) 2015, 15, 10292–10314. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  13. Debattisti, S.; Mazzei, L.; Panciroli, M. Automated extrinsic laser and camera inter-calibration using triangular targets. In Proceedings of the 2013 IEEE Intelligent Vehicles Symposium (IV), Gold Coast, QLD, Australia, 23–26 June 2013; pp. 696–701. [Google Scholar]
  14. Gao, C.; Spletzer, J.R. On-line calibration of multiple LIDARs on a mobile vehicle platform. In Proceedings of the 2010 IEEE International Conference on Robotics and Automation (ICRA), Anchorage, AK, USA, 3–7 May 2010; pp. 279–284. [Google Scholar]
  15. Moghadam, P.; Bosse, M.; Zlot, R. Line-based extrinsic calibration of range and image sensors. In Proceedings of the 2013 IEEE International Conference on Robotics and Automation, Karlsruhe, Germany, 6–10 May 2013; pp. 3685–3691. [Google Scholar]
  16. Wenkai, C.; Guodong, Y.; Zize, L. An extrinsic calibration method for binocular cameras and swing 2D laser rangefinder. In Proceedings of the 2017 29th Chinese Control And Decision Conference (CCDC), Chongqing, China, 28–30 May 2017; pp. 3556–3560. [Google Scholar]
  17. Hu, Z.; Li, Y.; Li, N.; Zhao, B. Extrinsic Calibration of 2-D Laser Rangefinder and Camera From Single Shot Based on Minimal Solution. IEEE Trans. Instrum. Meas. 2016, 65, 915–929. [Google Scholar] [CrossRef]
  18. Ahmad Yousef, K.M.; Mohd, B.J.; Al-Widyan, K.; Hayajneh, T. Extrinsic Calibration of Camera and 2D Laser Sensors without Overlap. Sensors (Basel) 2017, 17, 2346. [Google Scholar] [CrossRef] [PubMed]
  19. Schenk, K.; Kolarow, A.; Eisenbach, M.; Debes, K.; Gross, H.-M. Automatic calibration of a stationary network of laser range finders by matching movement trajectories. In Proceedings of the 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vilamoura, Portugal, 7–12 October 2012; pp. 431–437. [Google Scholar]
  20. Quenzel, J.; Papenberg, N.; Behnke, S. Robust extrinsic calibration of multiple stationary laser range finders. In Proceedings of the 2016 IEEE International Conference on Automation Science and Engineering (CASE), Fort Worth, TX, USA, 21–25 August 2016; pp. 1332–1339. [Google Scholar]
  21. Ruan, M.; Huber, D. Calibration of 3D sensors using a spherical target. In Proceedings of the 2014 2nd International Conference on 3D Vision (3DV), Tokyo, Japan, 8–11 December 2014; pp. 187–193. [Google Scholar]
  22. Chen, S.; Liu, J.; Wu, T.; Huang, W.; Liu, K.; Yin, D.; Liang, X.; Hyyppä, J.; Chen, R. Extrinsic Calibration of 2D Laser Rangefinders Based on a Mobile Sphere. Remote Sens. 2018, 10, 1176. [Google Scholar] [CrossRef]
  23. Pereira, M.; Silva, D.; Santos, V.; Dias, P. Self calibration of multiple LIDARs and cameras on autonomous vehicles. Robot. Auton. Syst. 2016, 83, 326–337. [Google Scholar] [CrossRef]
  24. Antone, M.E.; Friedman, Y. Fully Automated Laser Range Calibration. In Proceedings of the British Machine Vision Conference 2007 (BMVC), Coventry, UK, 10–13 September 2007; pp. 1–10. [Google Scholar]
  25. Almeida, M.; Dias, P.; Oliveira, M.; Santos, V. 3D-2D laser range finder calibration using a conic based geometry shape. In Proceedings of the International Conference Image Analysis and Recognition, Aveiro, Portugal, 25–27 June 2012; pp. 312–319. [Google Scholar]
  26. Gomez-Ojeda, R.; Briales, J.; Fernandez-Moral, E.; Gonzalez-Jimenez, J. Extrinsic calibration of a 2D laser-rangefinder and a camera based on scene corners. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015; pp. 3611–3616. [Google Scholar]
  27. Li, N.; Hu, Z.; Zhao, B. Flexible extrinsic calibration of a camera and a two-dimensional laser rangefinder with a folding pattern. Appl. Opt. 2016, 55, 2270–2280. [Google Scholar] [CrossRef] [PubMed]
  28. Dong, W.; Isler, V. A Novel Method for the Extrinsic Calibration of a 2D Laser Rangefinder and a Camera. IEEE Sens. J. 2018, 18, 4200–4211. [Google Scholar] [CrossRef] [Green Version]
  29. Keller, F.; Sternberg, H. Multi-Sensor Platform for Indoor Mobile Mapping: System Calibration and Using a Total Station for Indoor Applications. Remote Sens. 2013, 5, 5805–5824. [Google Scholar] [CrossRef] [Green Version]
  30. Heinz, E.; Eling, C.; Wieland, M.; Klingbeil, L.; Kuhlmann, H. Analysis of Different Reference Plane Setups for the Calibration of a Mobile Laser Scanning System. In Proceedings of the Internationalen Ingenieurvermessungskurs, Graz, Austria, 25–29 April 2017; pp. 131–146. [Google Scholar]
  31. Hartmann, J.; Paffenholz, J.A.; Strübing, T.; Neumann, I. Determination of Position and Orientation of LiDAR Sensors on Multisensor Platforms. J. Surv. Eng. 2017, 143. [Google Scholar] [CrossRef]
  32. Heinz, E.; Eling, C.; Wieland, M.; Klingbeil, L.; Kuhlmann, H. Development, Calibration and Evaluation of a Portable and Direct Georeferenced Laser Scanning System for Kinematic 3D Mapping. J. Appl. Geod. 2015, 9, 227–243. [Google Scholar] [CrossRef]
  33. Fernández-Moral, E.; González-Jiménez, J.; Arévalo, V. Extrinsic calibration of 2D laser rangefinders from perpendicular plane observations. Int. J. Robot. Res. 2015, 34, 1401–1417. [Google Scholar] [CrossRef]
  34. Choi, D.-G.; Bok, Y.; Kim, J.-S.; Kweon, I.S. Extrinsic calibration of 2D laser sensors. In Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, 31 May–7 June 2014; pp. 3027–3033. [Google Scholar]
  35. Fernandez-Moral, E.; Arevalo, V.; Gonzalez-Jimenez, J. Extrinsic calibration of a set of 2D laser rangefinders. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015; pp. 2098–2104. [Google Scholar]
  36. Fischler, M.A.; Bolles, R.C.J. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 1981, 24, 381–395. [Google Scholar] [CrossRef]
  37. Eade, E. Lie Groups for 2D and 3D Transformations. Available online: http://ethaneade.com/lie.pdf (accessed on 20 May 2017).
  38. Demski, P.; Mikulski, M.; Koteras, R. Characterization of Hokuyo UTM-30LX laser range finder for an autonomous mobile robot. In Advanced Technologies for Intelligent Systems of National Border Security; Springer: Berlin/Heidelberg, Germany, 2013; pp. 143–153. [Google Scholar]
  39. Navvis. The Navvis M3 Trolley. Available online: https://www.navvis.com/m3 (accessed on 21 November 2018).
Figure 1. The indoor mapping device: (a) Picture; (b) frames in the device.
Figure 2. The schematic of scanning the cuboid-shaped corridor with laser rangefinders (LRFs): (a) The 2D view of the scanned parallelogram by a single LRF; (b) the 3D view of the two scanned parallelograms scanned by two LRFs.
Figure 3. Calibration procedure.
Figure 4. The schematic of line sorting. The solid red circles denote the center points of the lines.
Figure 5. The schematic of the generating of all corridor observation candidates.
Figure 6. The schematic of assessing the coplanar lines by the volume of the tetrahedron. The red points are the end points of the black lines, and the transparent surfaces are the tetrahedron drawn by the end points. (a) A bad coplanar line pair, for which the volume of the tetrahedron they form is 1/6; (b) a good coplanar line pair, for which the volume of the tetrahedron they form is 0.
Figure 7. Error distribution of simulation based on different rotation operation.
Figure 8. Error distribution of simulation with different numbers of observations based on operation B.
Figure 9. Error distribution of simulation with different corridor sizes based on operation B.
Figure 10. Experiment scene A.
Figure 11. The line detection results and the selected correct corridor observations. Left-top, right-top, and left-bottom are the line detection results of LRF1, LRF2, and LRF3, respectively, and the sequence of the lines are plotted by red, green, blue, and magenta. Right-bottom is the corresponding correct corridor observation based on the initial poses (from the view of the corridor direction). The lines at the same surface are plotted by the same color, and the order of colors is still red, green, blue, and magenta.
Figure 12. The comparison of the sum of the volume of tetrahedrons in an incorrect corridor observation and in a correct corridor observation. (a,b) The different views of the tetrahedrons based on an incorrect corridor observation; (c,d) the different views of the tetrahedrons based on a correct corridor observation. The lines on the same surface are plotted by the same color. It should be noted that there was no tetrahedron when there were fewer than two lines on a surface.
Figure 13. The convergence trajectory of the calibration parameters.
Figure 14. Comparison of the fused point cloud before and after calibration. (a,c) The different views of the fused point cloud before the calibration; (b,d) the different views after the calibration. The points from LRF1, LRF2, and LRF3 are plotted by red, green, and blue, respectively.
Figure 15. Experiment scene B.
Figure 16. The calibration result distribution of LRF2 based on different initial poses.
Figure 17. The calibration result distribution of LRF2 based on the same accurate initial pose.
Table 1. Characteristics of the HOKUYO UTM-30LX 2D laser rangefinder.
Detection Range | σ | Angular Resolution | Measurement Resolution | Field of View | Scan Speed
0.1–60 m | 0.03 m | 0.25° | 0.001 m | 270° | 25 ms
Table 2. List of rotation operations under simulation: “sin()” is the sine function, and “rand(m,n)” is a function to generate an m-by-n matrix with numbers within [0, 1].
Operation Name | t Sequence | Pitch Sequence (°) | Roll Sequence (°) | Yaw Sequence (°)
A | 1, 2, …, 360 | 0 × t | 0 × t | t
B | 1, 2, …, 360 | 0 × t + 45 | 0 × t | t
C | 1, 2, …, 360 | sin(4 × t) × 45 + 45 | 0 × t | t
D | 1, 2, …, 360 | sin(4 × t) × 45 + 45 | sin(4 × t) × 45 + 45 | t
E | 1, 2, …, 360 | (360 − t) × 45/360 | 0 × t | sin(4 × t) × 90
F (random pose) | - | rand(360,1) × 360 | rand(360,1) × 360 | rand(360,1) × 360
Table 3. Calibration results and their deviations based on scene A.
Item | Rotation (°) | Rotation Dev. (°) | Translation (mm) | Translation Dev. (mm)
No. 1: LRF2 | −79.28, −0.63, −29.84 | 0.19, 0.05, 0.05 | 93.23, 97.27, −405.35 | 0.55, 6.85, 3.29
No. 2: LRF2 | −78.31, −1.11, −29.57 | 0.26, 0.14, 0.10 | 94.79, 91.65, −405.15 | 0.57, 4.51, 3.27
No. 3: LRF2 | −78.82, −0.84, −29.72 | 0.43, 0.23, 0.14 | 93.42, 99.60, −406.79 | 0.98, 5.81, 3.73
No. 1: LRF3 | 85.16, 1.14, −135.72 | 0.15, 0.04, 0.03 | −173.31, 59.24, −622.66 | 1.76, 3.04, 1.45
No. 2: LRF3 | 85.28, 1.24, −135.63 | 0.20, 0.05, 0.04 | −171.33, 54.08, −624.87 | 2.33, 3.73, 1.64
No. 3: LRF3 | 85.46, 1.27, −135.55 | 0.14, 0.06, 0.05 | −169.73, 52.64, −624.38 | 1.73, 2.44, 1.50
Table 4. Calibration results and their deviations based on scene B.
Item | Rotation (°) | Rotation Dev. (°) | Translation (mm) | Translation Dev. (mm)
No. 1: LRF2 | 90.31, 1.86, −157.87 | 0.31, 0.10, 0.05 | −97.7, 208.93, −1355.90 | 1.30, 6.96, 3.54
No. 2: LRF2 | 90.24, 1.80, −157.72 | 0.27, 0.08, 0.02 | −90.80, 211.66, −1350.00 | 1.56, 6.61, 4.67
No. 3: LRF2 | 89.80, 2.05, −158.55 | 0.39, 0.13, 0.01 | −92.66, 212.95, −1350.50 | 1.68, 10.30, 5.00
No. 1: LRF3 | 85.51, 1.18, 162.56 | 0.18, 0.06, 0.05 | 103.97, 212.24, −987.95 | 0.48, 2.11, 2.09
No. 2: LRF3 | 86.16, 1.67, 162.99 | 0.16, 0.06, 0.04 | 109.41, 211.43, −975.19 | 0.42, 1.92, 2.83
No. 3: LRF3 | 86.38, 1.53, 162.44 | 0.51, 0.13, 0.05 | 104.68, 205.66, −976.37 | 1.30, 6.96, 3.54

Yin, D.; Liu, J.; Wu, T.; Liu, K.; Hyyppä, J.; Chen, R. Extrinsic Calibration of 2D Laser Rangefinders Using an Existing Cuboid-Shaped Corridor as the Reference. Sensors 2018, 18, 4371. https://doi.org/10.3390/s18124371
