
Pre-calibration-free 3D shape measurement method based on fringe projection

Open Access

Abstract

This paper presents a pre-calibration-free 3D shape measurement method based on fringe projection. Unlike ordinary methods, it performs calibration and 3D shape measurement concurrently. The captured phase-coded fringe images are utilized to obtain homogenous control points from two camera viewpoints, and the rough 3D structure of these points can be retrieved. Further, a constrained non-linear least-squares optimization model is established to determine the in situ geometry of the optical components, and then, the 3D scene is reconstructed. This method provides an accurate 3D shape measurement capability even during disturbance of the optical geometry. Moreover, not requiring a preliminary calibration process makes the system ultra-flexible. The performance of this method was verified by experiments.

© 2016 Optical Society of America

1. Introduction

The fringe projection based 3D shape measurement technique has been extensively investigated owing to its accurate and fast 3D shape acquisition capability [1–3]. It projects phase-coded fringe patterns onto the measured surfaces using a digital light processing (DLP) projector, and one or two cameras capture the distorted fringe images synchronously. Once the phase information is extracted from the captured images, the 3D shape can be retrieved from the geometric description of the optical components, which commonly comprises intrinsic parameters such as the focal length, principal point, and lens distortion, in addition to extrinsic parameters such as the rigid pose of the cameras. Determination of these geometric descriptions, i.e., calibration, is therefore crucial for the measurement accuracy. Benefitting from advanced calibration methods [4–7], the measurement accuracy can be better than 1/10000 of the length of the measuring field. However, such high accuracy can only be achieved if the geometry of the optical components is stable over the time between calibration and measurement. In practical use, this requirement is difficult to satisfy because the optical geometry drifts under external disturbances such as temperature variations, vibrations, or mechanical shocks. Thus, achieving sustained accurate 3D shape measurements remains challenging.

In the ordinary method, the calibration is performed prior to the measurement. To achieve accurate 3D shape reconstruction, the geometry of the optical components during calibration must be consistent with that during measurement; thus, many commercial devices require that calibration and measurement be carried out after a pre-heating process that allows the system to reach a stable operating temperature. However, the system might still be disturbed by unexpected environmental changes. Zhang et al. [8] focused on fast calibration with a dedicated target with the aim of enhancing the robustness of the system, but disturbances can still occur during measurements, and frequently repeating the calibration is an elaborate task in on-line 3D measurements. Other researchers attempted to perform the calibration and measurement simultaneously, which is termed the self-calibration technique [9–12]. This seems to be the optimal solution, but the calibration results strongly depend on the quality of the homogenous feature points chosen from the measured scene [10]. Therefore, high-precision measurement requires the use of special targets with accurately designed features [11, 12]. Instead of using targets, the fringe projection method can provide a large number of accurate homogenous image points using phase information. Based on this advantage, Bräuer-Burchardt et al. [13] proposed a new methodology for system re-calibration and compensation of errors caused by thermal variation of the sensor and mechanical influences. Yin et al. [14] proposed a calibration approach based on a stereo vision model for the fringe projection profilometry scheme, in conjunction with the bundle adjustment strategy, and this improved the measurement accuracy. Xiao et al. [15] proposed a robust self-calibration 3D shape measurement method for use in fringe projection photogrammetry. Zhao et al. [16] combined phase-based matching and bundle adjustment to perform real-time calibration of the in situ relative rigid motion between two cameras in accurate large-scale metrology. These studies generally require that the optical geometry be preliminarily calibrated before the subsequent optimization, which may restrict their application in cases where a preliminary calibration is inconvenient to perform.

This paper presents a pre-calibration-free 3D shape measurement method to achieve sustained accurate and ultra-flexible 3D shape measurements based on fringe projection. Unlike previous methods, it directly uses the measured fringe images to perform calibration of the optical geometry and 3D shape measurement concurrently, so a preliminary calibration is not necessary. During the measurement, the DLP projector projects phase-coded sinusoidal fringes along the horizontal and vertical axes of the projector image frame onto the surfaces, and two cameras synchronously capture the distorted fringe images. The corresponding phase maps of the two cameras are calculated by phase shifting and a multi-frequency heterodyne algorithm [17], and many homogenous image points can thus be selected between camera views using the phase information. As the initial intrinsic parameters of each camera are unknown, their initial values are roughly deduced from the nominal parameters supplied by the camera and lens manufacturers. These figures are not completely accurate, but they are sufficient to retrieve a rough relative rigid motion between the cameras and the 3D structure of the homogenous control points. Furthermore, a non-linear least-squares optimization model that minimizes the difference between the control points and the re-projections of the 3D points in both cameras is built to refine the complete optical geometry of the system. However, minimizing the re-projection error only constrains the space point, image point, and optical center to be collinear; thus, it cannot yield a correct metric 3D reconstruction of the stereo rig unless the intrinsic parameters of the cameras are accurately known [18]. To solve this problem, we split the optimization into two steps: 1) only the relative rigid motion of the cameras and the 3D coordinates of the control points are optimized; 2) the intrinsic parameters of the cameras are added to the iterative refinement with two additional constraints. In this way, the in situ optical geometry along with the 3D structure of the control points can be determined, and the point cloud of the measured scene can then be computed.

To the best of our knowledge, this optimization technique differs from the existing methods [6, 13–16]. The method in [6] presented an optimization model that minimizes the metric distance error between the points reconstructed by triangulation and the reference points. The method in [13] used an optical geometry optimization model based on the epipolar constraint. The methods in [14] and [15] used the collinearity constraint for optimization in a manner similar to ours, but they required the measurement system to observe the object or target from at least three different viewpoints so that a general bundle-adjustment-based optimization could be implemented. In the proposed method, the measurement system observes the object from only a single viewpoint. Although the optimization in [16] also allows the system to observe the object from a single viewpoint, it requires the intrinsic parameters to be accurately calibrated and kept fixed during the iterative computation.

The main advantages of this method are twofold. First, it essentially uses the measured images to determine the in situ geometric description of the optical components, so it can deliver sustained accurate 3D shape measurement even in the presence of continuous disturbance of the optical geometry; this ability has great application potential in on-line 3D measurements. Second, removing the need for a preliminary calibration process makes the measurement method ultra-flexible. For example, different measurement dimensional ranges can be achieved by quickly and freely configuring the baseline length, measurement distance, and imaging focus of the two cameras, and then simply performing the measurements. Thus, this method will play an important role when a preliminary calibration or the use of a calibration target is inconvenient.

2. Measurement principles

2.1 Phase computation and matching

In this method, a series of sinusoidal fringes with constant phase shifts, aligned with the horizontal and vertical axes of the projector image frame, are projected onto the target object. Two cameras then capture the fringe images synchronously. The captured camera images can be expressed as:

\[ I_n(x,y) = A(x,y) + B(x,y)\cos\left[\phi(x,y) + \delta_n\right], \tag{1} \]
where (x, y) are the pixel coordinates (omitted in the following expressions for brevity), I_n is the recorded intensity, A is the average intensity, B is the modulation intensity, δ_n is the constant phase shift, and φ is the desired phase information, which can be solved for by the least-squares algorithm:
\[ \phi = \arctan\!\left(\sum_{n=1}^{N} I_n \sin(\delta_n) \Big/ \sum_{n=1}^{N} I_n \cos(\delta_n)\right). \tag{2} \]
Correspondingly, the ratio of B to A indicates the signal-to-noise ratio (SNR) of the current pixel:
\[ \gamma = B/A = 2\left\{\left[\sum_{n=1}^{N} I_n \sin(\delta_n)\right]^2 + \left[\sum_{n=1}^{N} I_n \cos(\delta_n)\right]^2\right\}^{0.5} \Big/ \sum_{n=1}^{N} I_n. \tag{3} \]
The arctangent function in Eq. (2) yields values in the range [−π, π] with 2π discontinuities. In this method, the multi-frequency heterodyne technique [17] is used to construct continuous phase maps of the images with fringes in the two directions. The correspondence between the camera views can thus be established, as shown in Fig. 1. In the ordinary method, the 3D point cloud would be reconstructed directly from the preliminary calibration results; the proposed method instead first uses a set of homogenous control points to estimate the optical geometry of the system.
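To make Eqs. (1)–(3) and the heterodyne step concrete, the sketch below computes the wrapped phase and the SNR ratio γ from N phase-shifted images, and combines two wrapped phase maps of different fringe frequencies into a continuous phase map. It is a minimal illustration in Python/NumPy (the authors' implementation was in C++), and the simple two-frequency combination is our assumption; Ref. [17] uses an optimized multi-frequency sequence.

```python
import numpy as np

def wrapped_phase_and_snr(images, deltas):
    """Wrapped phase (Eq. 2) and modulation ratio gamma = B/A (Eq. 3) from
    N phase-shifted fringe images I_n with known phase shifts delta_n."""
    I = np.asarray(images, dtype=np.float64)            # shape (N, H, W)
    d = np.asarray(deltas, dtype=np.float64)[:, None, None]
    S = np.sum(I * np.sin(d), axis=0)
    C = np.sum(I * np.cos(d), axis=0)
    # Sign convention depends on how delta_n enters Eq. (1); flip S if needed.
    phi = np.arctan2(S, C)                              # wrapped to (-pi, pi]
    gamma = 2.0 * np.sqrt(S**2 + C**2) / np.sum(I, axis=0)
    return phi, gamma

def heterodyne_unwrap(phi_hi, phi_lo, f_hi, f_lo):
    """Minimal two-frequency heterodyne step (a sketch, not Ref. [17]'s exact
    scheme): the beat phase of two wrapped maps with fringe counts f_hi > f_lo
    spans a single fringe over the field and fixes the fringe order of phi_hi."""
    beat = np.mod(phi_hi - phi_lo, 2.0 * np.pi)         # synthetic (beat) phase
    # Scale the beat phase up to the high-frequency map and round to the order.
    order = np.round((beat * f_hi / (f_hi - f_lo) - phi_hi) / (2.0 * np.pi))
    return phi_hi + 2.0 * np.pi * order                 # continuous phase map
```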

Fig. 1 Fringe projection and phase matching.

Precisely and robustly detecting homogenous control points is of paramount importance for accurate 3D scene retrieval in this method, and phase information is generally more robust to environmental light conditions than intensity information. Thus, a phase-based correlation method, inspired by the classic digital image correlation method [19], is used for homogenous point searching. For a given pixel point m_1 = [x_1, y_1]^T in one camera image, its sub-pixel homogenous point m_2 = [x_2, y_2]^T in the other camera image is searched for according to the following correlation criterion:

\[ c = \frac{\displaystyle\sum_{i=-W}^{W}\sum_{j=-W}^{W} (\varphi_{ij1}^{V}-\bar{\varphi}_{1}^{V})^{2}(\varphi_{ij1}^{H}-\bar{\varphi}_{1}^{H})^{2}(\varphi_{ij2}^{V}-\bar{\varphi}_{2}^{V})^{2}(\varphi_{ij2}^{H}-\bar{\varphi}_{2}^{H})^{2}}{\displaystyle\sum_{i=-W}^{W}\sum_{j=-W}^{W} (\varphi_{ij1}^{V}-\bar{\varphi}_{1}^{V})^{2}(\varphi_{ij1}^{H}-\bar{\varphi}_{1}^{H})^{2} \sum_{i=-W}^{W}\sum_{j=-W}^{W} (\varphi_{ij2}^{V}-\bar{\varphi}_{2}^{V})^{2}(\varphi_{ij2}^{H}-\bar{\varphi}_{2}^{H})^{2}}, \tag{4} \]
where φ_{ijk}^V and φ_{ijk}^H are the phase values in the phase maps of the images with vertical and horizontal fringes, respectively, k is the camera index, i and j are the row and column indices within a square subset of (2W+1) × (2W+1) pixels centered at point m_1 in one camera image and used to track its corresponding location m_2 in the other, and the overbar denotes the average phase value over the subset. To achieve accurate sub-pixel localization, a non-linear least-squares iterative optimization is employed during the search.
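The following sketch (Python/NumPy, with illustrative helper names) evaluates the correlation criterion of Eq. (4) as written over an integer-pixel search region. The local search window centered on an initial guess is a simplification introduced here, and the paper's sub-pixel refinement by non-linear least squares is omitted.

```python
import numpy as np

def correlation(sub1V, sub1H, sub2V, sub2H):
    """Correlation criterion c of Eq. (4) for one pair of (2W+1)x(2W+1) phase
    subsets (V/H = phase maps of vertical/horizontal fringes)."""
    a = (sub1V - sub1V.mean())**2 * (sub1H - sub1H.mean())**2
    b = (sub2V - sub2V.mean())**2 * (sub2H - sub2H.mean())**2
    return np.sum(a * b) / (np.sum(a) * np.sum(b))

def search_homogenous_point(p1, phase1V, phase1H, phase2V, phase2H, W=10, search=20):
    """Integer-pixel search for the homogenous point of p1 = (row, col) in the
    other camera view by maximizing c; assumes all subsets stay inside the image.
    In practice the continuous phase maps narrow the candidate region, and the
    best integer location is refined to sub-pixel accuracy (omitted here)."""
    r0, c0 = p1
    refV = phase1V[r0 - W:r0 + W + 1, c0 - W:c0 + W + 1]
    refH = phase1H[r0 - W:r0 + W + 1, c0 - W:c0 + W + 1]
    best, best_c = None, -np.inf
    for r in range(r0 - search, r0 + search + 1):
        for c in range(c0 - search, c0 + search + 1):
            curV = phase2V[r - W:r + W + 1, c - W:c + W + 1]
            curH = phase2H[r - W:r + W + 1, c - W:c + W + 1]
            cc = correlation(refV, refH, curV, curH)
            if cc > best_c:
                best_c, best = cc, (r, c)
    return best, best_c
```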

2.2 Rough 3D reconstruction of homogenous control points

The collected homogenous control points encapsulate all geometric information between the cameras, and our goal is first to retrieve the 3D structure of these points at a low precision level. In this method, the cameras are modeled by a standard pin-hole model: an arbitrary 3D point M is denoted by M_w in the world coordinate system and by M_{ci} in the i-th camera coordinate system. The rays departing from M pass through the camera lens and are captured on the i-th camera sensor plane to form the image point m_i. In practice, lens aberrations distort the images, so the imaging process is described by a non-linear camera model [20]:

\[ \begin{cases} s_i\,\tilde{m}_i' = A_i\,[R_i \,|\, t_i]\,\tilde{M}_w \\ m_i = m_i' + \theta(k_i;\, m_i') \end{cases}, \quad \text{with } A_i = \begin{bmatrix} a_{xi} & 0 & u_i \\ 0 & a_{yi} & v_i \\ 0 & 0 & 1 \end{bmatrix}, \tag{5} \]
where the tilde denotes homogenous coordinates, s_i is a scale factor, m_i' is the ideal distortion-free image point, A_i is the intrinsic camera matrix containing the normalized horizontal and vertical focal lengths (a_{xi}, a_{yi}) and the principal point coordinates (u_i, v_i) in the image coordinate system, and θ(k_i; ·) is the lens distortion function parameterized by the distortion coefficients k_i. R_i and t_i are the rigid rotation matrix and the translation vector, respectively, which are also termed extrinsic parameters. In the stereo rig, the cameras are fixed with respect to each other, so a rotation matrix R and a translation vector t can be introduced to represent the relative rigid motion between the cameras:
\[ \begin{cases} R = R_2 R_1^{-1} \\ t = t_2 - R_2 R_1^{-1} t_1 \end{cases}. \tag{6} \]
Two perspective images of a single rigid scene are related by epipolar geometry, which is described by a 3 × 3 fundamental matrix F that contains all geometric information between the two views, including the intrinsic parameters and the relative rigid motion:
\[ \tilde{m}_2^{\,T} F\, \tilde{m}_1 = 0, \quad \text{with } F = A_2^{-T} [t]_{\times} R A_1^{-1}, \tag{7} \]
where [t]_× is the antisymmetric matrix defined by t such that [t]_× y = t × y for any vector y.

In order to obtain the rigid motion information, the cameras are first assumed to be ideal perspective imaging systems and the distortion coefficients k_i are set to 0, so that the distorted and ideal image points coincide (m̃_i' = m̃_i); the fundamental matrix F can then be estimated by the classic 8-point algorithm followed by an iterative optimization that minimizes the distances between points and their epipolar lines. Furthermore, a rough initial guess of the intrinsic parameters in A_i can be deduced from the nominal camera and lens parameters. For example, the normalized horizontal and vertical focal lengths (a_{xi}, a_{yi}) in A_i can be initialized as a_{xi} = f_i/d_{xi} and a_{yi} = f_i/d_{yi}, where f_i is the nominal focal length, and d_{xi} and d_{yi} are the horizontal and vertical pixel sizes of the camera sensor, respectively. The principal point (u_i, v_i) is typically assumed to be at the center of the camera image. Although the initial intrinsic matrices are not very accurate, they are sufficient to obtain the so-called essential matrix E = [t]_× R according to Eq. (7); the relative rigid motion of the cameras can then be computed at a low precision level by singular value decomposition of E [18].
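As an illustration of this step, the sketch below (Python with OpenCV and NumPy, not the authors' code) builds the nominal intrinsic matrices, estimates F, forms E = A_2^T F A_1, and decomposes E into a rough relative motion. The OpenCV calls are stand-ins for the procedure described in the text: cv2.findFundamentalMat with FM_LMEDS replaces the 8-point estimate plus epipolar-distance refinement, and cv2.recoverPose performs the SVD-based decomposition with the cheirality test.

```python
import numpy as np
import cv2

def nominal_intrinsics(f_mm, pixel_mm, width, height):
    """Rough intrinsic matrix A_i: nominal focal length over pixel size,
    principal point assumed at the image centre (square pixels assumed)."""
    a = f_mm / pixel_mm
    return np.array([[a, 0.0, width / 2.0],
                     [0.0, a, height / 2.0],
                     [0.0, 0.0, 1.0]])

def rough_relative_motion(pts1, pts2, A1, A2):
    """Estimate F, form E = A2^T F A1 (Eq. 7 with distortion ignored), and
    decompose E into a rough relative motion (R, t); pts are Nx2 float arrays."""
    F, _ = cv2.findFundamentalMat(pts1, pts2, cv2.FM_LMEDS)
    E = A2.T @ F @ A1
    # Project E onto the essential-matrix manifold (two equal singular values).
    U, _, Vt = np.linalg.svd(E)
    E = U @ np.diag([1.0, 1.0, 0.0]) @ Vt
    # Normalize the image points, then let recoverPose pick the (R, t) pair
    # that places the triangulated points in front of both cameras.
    n1 = cv2.undistortPoints(pts1.reshape(-1, 1, 2), A1, None)
    n2 = cv2.undistortPoints(pts2.reshape(-1, 1, 2), A2, None)
    _, R, t, _ = cv2.recoverPose(E, n1, n2, np.eye(3))
    return R, t        # t is defined only up to scale at this stage
```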

In particular, estimation of the fundamental matrix depends on the homogenous points, which should capture the variation of the image due to the different viewpoints, and it is often suggested that evenly distributed homogenous points generally yield a correct solution [21]. To select well-distributed homogenous control points, a virtual grid is drawn in one image plane and the grid vertices are initially chosen as candidate control points, as shown in Fig. 2. Before searching for homogenous points, the quality of each candidate point is evaluated according to the SNR computed by Eq. (3), because unexpected conditions such as highly reflective surfaces, shading, and noise might cause large phase errors and lead to serious correspondence errors; the low-quality candidates are therefore removed.

Fig. 2 Homogenous control points selection.

After retrieving the relative rigid motion between the cameras, one camera coordinate system is taken as the reference, and the 3D structure of the homogenous control points can be reconstructed by a least-squares solution according to Eq. (8):

\[ \begin{cases} s_1\,\tilde{m}_1 = A_1\,[I \,|\, 0]\,\tilde{M}_{c1} \\ s_2\,\tilde{m}_2 = A_2\,[R \,|\, t]\,\tilde{M}_{c1} \end{cases}, \tag{8} \]
where I is the 3 × 3 identity matrix.
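A minimal sketch of this reconstruction (Python/OpenCV, our choice of routine rather than the authors'), with camera 1 as the reference frame as in Eq. (8):

```python
import numpy as np
import cv2

def triangulate_controls(pts1, pts2, A1, A2, R, t):
    """Linear least-squares triangulation of the control points (Eq. 8):
    P1 = A1 [I | 0] and P2 = A2 [R | t], both expressed in the camera-1 frame."""
    P1 = A1 @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = A2 @ np.hstack([R, t.reshape(3, 1)])
    Xh = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)   # 4 x L homogeneous coords
    return (Xh[:3] / Xh[3]).T                            # L x 3 rough points M_c1
```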

2.3 Constrained non-linear least-squares optimization

The computation in the previous section can only obtain a coarse 3D structure of the homogenous points owing to the rough guess of the intrinsic camera parameters. In order to obtain an accurate solution, a non-linear least-squares optimization is used to refine the optical geometry (including the intrinsic camera matrices, lens distortion, and relative rigid motion between the cameras) together with the 3D structure of the homogenous control points. The corresponding cost function is built from the discrepancies between the actually collected control points and their expected re-projections:

\[ C_{st} = \sum_{i=1}^{L} \left\| m_1^{\,i} - \hat{m}_1^{\,i}(A_1, k_1, M_{c1}^{\,i}) \right\|^2 + \sum_{i=1}^{L} \left\| m_2^{\,i} - \hat{m}_2^{\,i}(A_2, k_2, R, t, M_{c1}^{\,i}) \right\|^2, \tag{9} \]
where the superscript i denotes the sequence number of the control points, L is the total number of homogenous control points, and m̂_1^i and m̂_2^i are the re-projections of M_{c1}^i onto the two camera image planes. Typically, the classic Levenberg-Marquardt algorithm can be used to solve this problem. However, according to our tests, this optimization has difficulty converging to a correct solution for two reasons. First, the initial optical geometry and 3D structure of the control points are not accurate, and many parameters have to be optimized in the iterative computation, so it is very likely to converge to a local optimum. Second, the objective function in Eq. (9) only enforces that the space point, image point, and optical center are collinear; this is appropriate for a general projective reconstruction, but it cannot guarantee a correct metric 3D reconstruction of the homogenous points [18].

To address the first problem, the proposed method divides the optimization into two steps: 1) the intrinsic camera matrices and lens distortion parameters are fixed, and only the relative rigid motion and the 3D structure are refined during the iterative computation; 2) after obtaining a solution close to the correct one, all parameters are optimized simultaneously. As for the second problem, two additional constraints are added during the iterative optimization: first, the ratio of the horizontal and vertical normalized focal lengths μ_f = a_{xi}/a_{yi} in each camera matrix A_i is kept constant; second, the ratio of the normalized focal lengths of the two cameras μ_c = a_{x1}/a_{x2} is kept constant. These two constraints help the non-linear least-squares optimization converge to the globally optimal solution.
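The sketch below (Python with SciPy, not the authors' implementation) illustrates the second optimization step with the two ratio constraints enforced by construction: only a single focal-length variable is optimized, and the remaining focal lengths are derived from the fixed ratios μ_f and μ_c. A simple two-coefficient radial distortion model is assumed in place of the general θ(k_i; ·); the first step corresponds to the same residual function with the intrinsic and distortion parameters frozen.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(A, k, R, t, X):
    """Pin-hole projection of Eq. (5) with an assumed two-coefficient radial
    distortion model standing in for theta(k_i; .)."""
    Xc = X @ R.T + t                                  # points in the camera frame
    x = Xc[:, :2] / Xc[:, 2:3]                        # normalized coordinates
    r2 = np.sum(x**2, axis=1, keepdims=True)
    x = x * (1.0 + k[0] * r2 + k[1] * r2**2)          # radial distortion
    return x * np.array([A[0, 0], A[1, 1]]) + np.array([A[0, 2], A[1, 2]])

def refine_geometry(m1, m2, A1, A2, R0, t0, X0, mu_f1, mu_f2, mu_c):
    """Step 2 of the constrained refinement (Eq. 9): intrinsics, distortion,
    relative motion and 3D control points are optimized jointly, while the
    ratios mu_f = a_x/a_y of each camera and mu_c = a_x1/a_x2 stay fixed."""
    L = X0.shape[0]
    p0 = np.hstack([A1[0, 0], A1[0, 2], A1[1, 2],     # a_x1, u_1, v_1
                    A2[0, 2], A2[1, 2],               # u_2, v_2
                    np.zeros(4),                      # k_1, k_2 (two terms each)
                    Rotation.from_matrix(R0).as_rotvec(),
                    np.ravel(t0), np.ravel(X0)])

    def residuals(p):
        ax1 = p[0]
        K1 = np.array([[ax1, 0, p[1]], [0, ax1 / mu_f1, p[2]], [0, 0, 1.0]])
        ax2 = ax1 / mu_c
        K2 = np.array([[ax2, 0, p[3]], [0, ax2 / mu_f2, p[4]], [0, 0, 1.0]])
        k1, k2 = p[5:7], p[7:9]
        R = Rotation.from_rotvec(p[9:12]).as_matrix()
        t = p[12:15]
        X = p[15:].reshape(L, 3)
        r1 = project(K1, k1, np.eye(3), np.zeros(3), X) - m1  # camera-1 error
        r2 = project(K2, k2, R, t, X) - m2                    # camera-2 error
        return np.concatenate([r1.ravel(), r2.ravel()])

    return least_squares(residuals, p0, method="lm")          # Levenberg-Marquardt
```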

After the optimization, the translation vector and the 3D coordinates of the control points are given in a metric reconstruction coordinate system, and they need to be transformed into the Euclidean coordinate system by a scale factor. If the distance between any two control points in the Euclidean coordinate system is known, it can be used to compute this scale factor.
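A small sketch of this scaling step, assuming the Euclidean distance between two of the reconstructed control points is known (for instance from a calibrated scale bar, as in Section 3.1):

```python
import numpy as np

def apply_metric_scale(X, t, idx_a, idx_b, d_known):
    """Transform the metric reconstruction to the Euclidean coordinate system:
    the known distance between control points idx_a and idx_b fixes the scale
    of both the 3D points and the translation vector."""
    s = d_known / np.linalg.norm(X[idx_a] - X[idx_b])
    return X * s, t * s
```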

We have now obtained accurate intrinsic camera matrices, lens distortion coefficients, and the relative rigid motion of the two cameras. Thus, the captured fringe images can be used to compute a dense and accurate 3D point cloud of the measured scene, as in the ordinary method.

2.4 Summary of measurement procedures

The proposed 3D measurement procedures are shown in Fig. 3 and can be summarized as the following steps:

Fig. 3 Diagram of measurement procedure.

Step 1: Fringe image capture and phase matching. After capturing images with horizontal and vertical fringes, the phase maps are computed by phase shifting and the multi-frequency heterodyne technique. A set of initial candidate control points is selected in one camera image, and their homogenous points in the other camera image are determined by the phase-based correlation method.

Step 2: Relative rigid motion estimation and 3D reconstruction of homogenous points. From the collected homogenous control points, the fundamental matrix of the two camera views is estimated. As the parameters of the intrinsic camera matrices can be roughly deduced from nominal values supplied by the hardware manufacturers, the rough rigid motion between the cameras can be computed from Eq. (7), and the rough 3D structure of the homogenous control points can then be retrieved by Eq. (8).

Step 3: Non-linear least-squares optimization. To obtain accurate in situ optical geometry, a constrained non-linear least-squares optimization method is used, and the cost function is built according to discrepancies between actually collected control points and re-projection points from the corresponding reconstructed 3D points.

Step 4: 3D reconstruction of the measured scene. After obtaining the accurate intrinsic parameters and relative rigid motion of the two cameras, the 3D structure of the measured scene can be computed as in the ordinary method.

3. Experiments and discussions

In order to test the performance of the proposed method, we programmed the experimental system in Fig. 4 using Microsoft Visual Studio 2010 with C++. The imaging sensor was an 8-bit monochrome Basler acA1300-30gm Gigabit Ethernet camera with a frame rate of 30 fps and 1296 × 966 pixel resolution. The camera lens was a Ricoh FL-HC1214-2M with a fixed focal length of 12 mm. The projector was built around a Texas Instruments LightCrafter 4500 board with a 150 ANSI lumen RGB LED, and it had a maximum frame rate of 120 fps at a resolution of 912 × 1140 with 8-bit gray-scale. The cameras and the projector were synchronized by an external triggering circuit at a frame rate of 30 fps. Our experiments included: (1) measuring a ceramic ball bar repeatedly during the warm-up of the system to evaluate the accuracy and stability of the proposed method; (2) measuring objects with different dimensional ranges to illustrate the measurement flexibility; and (3) measuring objects that are challenging for the fringe projection method to demonstrate the performance of the proposed method.

Fig. 4 Measurement scene.

3.1 Accuracy and stability test

First, a ceramic ball bar with an accurately known separation between its sphere centers was measured every 3 min for 2 h from system start-up, giving 41 measurements. As the cameras heated up from cold to working temperature, the optical geometry of the system drifted slightly and continuously. For each measurement, both the proposed method and the ordinary method were used to retrieve the 3D point cloud from the same captured images. The measurement accuracy and stability of the two methods can thus be compared using the measurement error, defined as the difference between the measured distance of the two sphere centers and the known value. For the ordinary method, the optical geometry (including the intrinsic and extrinsic parameters) of the system was calibrated by the classic planar-target-based calibration method [22] once the system reached operating temperature, and these calibrated parameters were used to reconstruct the 3D point cloud of every measurement.
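The paper does not detail how the sphere centers are extracted from the point cloud; one common choice, shown below as an assumption (Python/NumPy), is a linear least-squares sphere fit, after which the measurement error is the fitted center-to-center distance minus the known separation.

```python
import numpy as np

def fit_sphere(points):
    """Linear least-squares sphere fit: x^2 + y^2 + z^2 = 2ax + 2by + 2cz + d,
    so the centre (a, b, c) and radius follow from one linear solve."""
    P = np.asarray(points, dtype=np.float64)
    A = np.hstack([2.0 * P, np.ones((P.shape[0], 1))])
    b = np.sum(P**2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    centre = sol[:3]
    radius = np.sqrt(sol[3] + np.sum(centre**2))
    return centre, radius

def ball_bar_error(points_a, points_b, known_length):
    """Measurement error: fitted centre-to-centre distance minus the known value."""
    ca, _ = fit_sphere(points_a)
    cb, _ = fit_sphere(points_b)
    return np.linalg.norm(ca - cb) - known_length
```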

The experimental scene is shown in Fig. 4. The measurement range was approximately 240 mm in width, 180 mm in height, and 200 mm in depth, with a measurement distance of approximately 650 mm. The standard separation of the two sphere centers was 149.9327 mm. A scale bar, on which the distance between the ring-coded circle centers was accurately calibrated, was placed in the scene to facilitate the transformation from the metric reconstruction coordinate system to the Euclidean coordinate system. In both measurement methods, a 4-step phase-shifting method was used to compute the phase values of the camera views. In the proposed method, a grid of 30 rows and 40 columns of candidate control points was selected in one camera view, and their homogenous control points in the other camera view were determined by phase matching and the quality evaluation described in Section 2.1. The distribution of the selected homogenous control points is shown in Fig. 5(a), and Fig. 5(b) gives the corresponding 3D point cloud obtained by the proposed method.

Fig. 5 Accuracy and stability test: (a) Homogenous control points selection. (b) Reconstructed 3D point cloud.

The test results are shown in Fig. 6. The measurement error of the ordinary method was –0.01 mm at system start-up and increased steadily as the system heated up; the error then stabilized at ~0.07 mm once the system reached operating temperature after about 30 min. The absolute maximum error and standard deviation (STD) of the ordinary method were 0.072 mm and 0.017 mm, respectively. Additionally, the distance between the ring-coded circle centers was computed with the ordinary method for each measurement, and the corresponding error trend shown in Fig. 6 was consistent with that of the sphere-center length error. This confirms that the optical geometry of the system was indeed drifting continuously owing to the warming of the electronic components in the cameras; consequently, the ordinary method generally requires that calibration and measurement both be performed after the system reaches operating temperature to ensure that the measurement results are credible, and even a slight disturbance of the optical geometry requires re-calibration.

Fig. 6 Measurement error comparisons.

In comparison, the proposed method achieved nearly the same error distribution in both the heating-up stage and the working-temperature stage, with an absolute maximum error and STD of 0.062 mm and 0.005 mm, respectively. The STD values demonstrate that the proposed method achieved more stable measurement results even though the optical geometry of the measurement system was drifting continuously, but it did not achieve better performance in terms of the absolute maximum error. This is because the two camera lenses were identical and the camera sensor pixels were square; thus, the ratio of the horizontal and vertical normalized focal lengths in each camera and the ratio of the two cameras' normalized focal lengths were both reasonably assumed to be 1 during the optimization step. However, due to manufacturing errors, the two actual ratios are very close to, but not exactly, 1, which reduces the accuracy of the optical geometry estimated in the iterative optimization. To verify this inference, the initial intrinsic parameters of the cameras were calibrated by the classic planar-target-based calibration method [22] before the iterative optimization in the proposed method, and the error of each measurement was again computed and is given in Fig. 6. The corresponding maximum absolute error and STD became 0.028 mm and 0.005 mm, respectively, showing that the measurement accuracy was improved. From the above results, the measurement accuracy of the proposed method in its current form is sufficient for many 3D shape measurement applications such as reverse engineering, heritage conservation, virtual reality, and entertainment. If extremely high measurement accuracy is required, the cameras need only a single calibration before continuous measurements.

3.2 Measurement flexibility test

The following experiment demonstrates the ultra-flexible measurement capability of this method. The ordinary method requires that the size of the calibration target match the measurement range to obtain accurate calibration results; in practice, however, it is very expensive to manufacture a very large or very small calibration target, and it is often inconvenient to perform the calibration at the measurement location. In the proposed method, since no pre-calibration is needed, the baseline length of the cameras, the measurement distance, and the imaging focus can be configured freely and quickly to accommodate different measurement ranges. In this test, two statues were measured at large-scale and small-scale ranges. Figure 7(a) shows the large-scale measurement scene. The camera baseline was set to ~400 mm with a measurement distance of 1000 mm, giving a measurement range of approximately 360 mm in width and 270 mm in height. The measured 3D point cloud is shown in Fig. 7(b). Figure 7(c) illustrates the small-scale measurement scene, with a baseline of about 120 mm and a measurement distance of 250 mm, giving a measurement range of approximately 80 mm in width and 60 mm in height. The finely detailed 3D shape of the dragon statue shown in Fig. 7(d) was obtained. Thus, the proposed method can be used to develop an adaptive measurement system that copes with different measurement range demands.

Fig. 7 Measurement flexibility test: (a) Measurement scene of large scale range; (b) 3D point cloud of large scale range; (c) Measurement scene of small scale range; (d) 3D point cloud of small scale range.

3.3 Measurement test of various objects

The measurement accuracy of the proposed method relies on the coordinate accuracy of the homogenous control points obtained by phase matching; unexpected conditions such as global illumination, inter-reflections, subsurface scattering, and image noise will introduce phase errors and decrease the accuracy with which the measurement system's optical geometry is estimated. This test is therefore designed to illustrate the performance of the proposed method when measuring objects that are challenging for the fringe projection method. The ordinary method was also used to compute 3D point clouds from the same captured fringe images in order to quantitatively analyze the measurement error, and these were taken as references for comparison with the point clouds obtained by the proposed method.

Figure 8 shows the objects, the reconstructed 3D point clouds, and the 3D comparisons. A packaging bag with a richly textured surface is shown in Fig. 8(a); the variations in gray level over the surface often result in phase errors. This can be observed in the reconstructed 3D point cloud: the actual bag surface is very smooth, yet the point cloud is uneven in the richly textured areas. The 3D comparison result also verifies the impact of the phase error.

Fig. 8 Measurement test of various objects: (a) Measurement of packaging bag with richly textured surface; (b) Measurement of manufactured component with highly reflective surface; (c) Measurement of grape with translucent surface.

A metal component with a highly reflective surface was measured, as shown in Fig. 8(b). This type of surface contains areas of high and low reflectivity, causing the images to be overexposed or noisy, so phase errors are inevitable. In the point cloud, data are missing in some small high-reflectivity areas and are noisy in the low-reflectivity areas. According to the 3D comparison, the error in most areas is within ±0.1 mm.

The test involving grapes is shown in Fig. 8(c). Measuring grapes is very challenging for the fringe projection method and other structured light methods because their translucent surfaces cause strong sub-surface scattering and inter-reflection. The phase error is therefore very large and the computed 3D point cloud is noisy. The 3D comparison indicates that the error is much larger than for the previous two objects.

According to the above experimental results, the phase computation accuracy is critical for the proposed method, and the different reflection properties of measured surfaces introduce phase errors. In recent years, many researchers have focused on separating the different reflection components of images to improve the accuracy of phase computation [23, 24], and these studies provide valuable guidance for refining the accuracy of the proposed method in future work.

4. Conclusions

This paper proposes a novel pre-calibration-free 3D shape measurement method based on fringe projection. It directly uses the captured phase-coded fringe images to obtain homogenous control points from two camera viewpoints and retrieves the rough 3D structure of these points. Furthermore, a constrained non-linear least-squares optimization model is established to refine the in situ geometry of the optical components along with the 3D structure of the control points, allowing the 3D shape of the measured scene to be computed accurately, as in the ordinary method. According to the experiments, this method can deliver sustained accurate 3D shape measurement even in the presence of continuous drift and disturbance of the optical geometry. Moreover, it makes the system ultra-flexible because a preliminary calibration process is not required.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (No. 51505169) and the Major Program of the Science and Technology Planning Project of Hubei Province (No. 2013AEA003).

References and links

1. B. W. Li, Y. J. Wang, J. F. Dai, W. Lohry, and S. Zhang, "Some recent advances on superfast 3D shape measurement with digital binary defocusing techniques," Opt. Lasers Eng. 54, 236–246 (2014).

2. K. Zhong, Z. W. Li, Y. S. Shi, C. J. Wang, and Y. Z. Lei, "Fast phase measurement profilometry for arbitrary shape objects without phase unwrapping," Opt. Lasers Eng. 51(11), 1213–1222 (2013).

3. Z. W. Li, K. Zhong, Y. F. Li, X. H. Zhou, and Y. S. Shi, "Multi-view phase-shifting: a high-speed and full-resolution 3D measurement framework for arbitrary shape dynamic object," Opt. Lett. 38(9), 1389–1391 (2013).

4. Z. W. Li, Y. S. Shi, C. J. Wang, and Y. Y. Wang, "Accurate calibration method for a structured light system," Opt. Eng. 47(5), 053604 (2008).

5. M. Vo, Z. Y. Wang, B. Pan, and T. Y. Pan, "Hyper-accurate flexible calibration technique for fringe-projection-based three-dimensional imaging," Opt. Express 20(15), 16926–16941 (2012).

6. Y. Cui, F. Zhou, Y. Wang, L. Liu, and H. Gao, "Precise calibration of binocular vision system used for vision measurement," Opt. Express 22(8), 9134–9149 (2014).

7. Z. Wang, "Removal of noise and radial lens distortion during calibration of computer vision systems," Opt. Express 23(9), 11341–11356 (2015).

8. Z. Zhang, H. Ma, S. Zhang, T. Guo, C. E. Towers, and D. P. Towers, "Simple calibration of a phase-based 3D imaging system based on uneven fringe projection," Opt. Lett. 36(5), 627–629 (2011).

9. T. Dang, C. Hoffmann, and C. Stiller, "Continuous stereo self-calibration by camera parameter tracking," IEEE Trans. Image Process. 18(7), 1536–1550 (2009).

10. P. Garbacz and W. Mizak, "A novel approach for automation of stereo camera calibration process," Pomiary Automatyka Robotyka 17(2), 234–238 (2013).

11. D. H. Xu, Q. Q. Zeng, H. J. Zhao, C. Z. Guo, K. Kidono, and Y. Kojima, "Online stereovision calibration using on-road markings," in Proceedings of IEEE Conference on Intelligent Transportation Systems (IEEE, 2014), pp. 245–252.

12. C. Bräuer-Burchardt, P. Kühmstedt, and G. Notni, "Calibration of stereo 3D scanners with minimal number of views using plane targets and vanishing points," in Proceedings of Computer Analysis of Images and Patterns: Part II (Academic, 2015), pp. 61–72.

13. C. Bräuer-Burchardt, P. Kühmstedt, and G. Notni, "Error compensation by sensor re-calibration in fringe projection based optical 3D stereo scanners," in Proceedings of International Conference on Image Analysis and Processing Part II (Academic, 2011), pp. 363–373.

14. Y. Yin, X. Peng, A. Li, X. Liu, and B. Z. Gao, "Calibration of fringe projection profilometry with bundle adjustment strategy," Opt. Lett. 37(4), 542–544 (2012).

15. Y. L. Xiao, J. Xue, and X. Su, "Robust self-calibration three-dimensional shape measurement in fringe-projection photogrammetry," Opt. Lett. 38(5), 694–696 (2013).

16. H. J. Zhao, Z. Wang, H. Z. Jiang, Y. Xu, and C. Dong, "Calibration for stereo vision system based on phase matching and bundle adjustment algorithm," Opt. Lasers Eng. 68, 203–213 (2015).

17. C. E. Towers, D. P. Towers, and J. D. C. Jones, "Absolute fringe order calculation using optimized multi-frequency selection in full-field profilometry," Opt. Lasers Eng. 43(7), 788–800 (2005).

18. R. I. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, 2nd ed. (Cambridge University Press, 2004).

19. B. Pan, "Recent progress in digital image correlation," Exp. Mech. 51(7), 1223–1235 (2011).

20. J. Heikkilä and O. Silvén, "A four-step camera calibration procedure with implicit image correction," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 1997), pp. 1106–1112.

21. J. K. Seo, H. K. Hong, C. W. Jho, and M. H. Choi, "Two quantitative measures of inlier distributions for precise fundamental matrix estimation," Pattern Recognit. Lett. 25(6), 733–741 (2004).

22. Z. Y. Zhang, "A flexible new technique for camera calibration," IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000).

23. M. Gupta, A. Agrawal, A. Veeraraghavan, and S. G. Narasimhan, "A practical approach to 3D scanning in the presence of inter-reflections, subsurface scattering and defocus," Int. J. Comput. Vis. 102(1–3), 33–55 (2013).

24. M. O'Toole, J. Mather, and K. N. Kutulakos, "3D shape and indirect appearance by structured light transport," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2014), pp. 3246–3253.
