
Pattern Recognition

Volume 98, February 2020, 107058

Robust one-dimensional calibration and localisation of a distributed camera sensor network

https://doi.org/10.1016/j.patcog.2019.107058

Highlights

  • We improve upon general-motion 1D calibration for multi-view networks.

  • Normalising the projective reconstruction improves the linear least-squares result.

  • A final bundle adjustment stage greatly improves calibration accuracy.

  • ADMM and Gaussian belief propagation allow a highly accurate distributed algorithm.

  • The improved algorithms outperform the original algorithm and checkerboard calibration.

Abstract

Calibration and localisation of a camera sensor network is an essential requirement for higher-level computer vision tasks, such as mapping or tracking. Additionally, distributed algorithms are being increasingly used to create scalable networks robust to node failure. We propose a distributed calibration and localisation algorithm based on multi-view one-dimensional calibration, alternating direction method of multipliers, and Gaussian belief propagation. Our algorithm builds upon an existing calibration algorithm by improving the numerical conditioning and non-linear refinement. We adapt this to a distributed network, bringing local estimates at each camera node to global consensus. Simulation and experimental results show that our algorithm performs with high accuracy compared to other calibration techniques, in centralised and distributed networks, and is well suited for practical applications.

Introduction

Calibration and localisation of a camera sensor network (CSN) enables higher-level computer vision tasks by allowing metric 3D information to be attained from 2D images. Most multi-view calibration algorithms use a central processor with access to data from all camera nodes, which can encounter communication issues in large CSNs and represents a single point of failure. With low-cost on-board processing becoming increasingly feasible for robotic vision applications, such as networks of micro aerial vehicles (MAVs), distributed processing can ensure robustness and scalability. This promotes the need for a simple calibration and localisation algorithm for distributed CSNs.

The most popular calibration algorithm is Zhang’s ‘2D’ calibration, which uses a planar pattern seen at arbitrary orientations [1] and works with various camera models [2], [3]. Unfortunately, in a large CSN, the pattern self-occludes when viewed at wide angles. Alternatively, ‘self-calibration’ uses no pattern, relying instead on structure from motion (SfM) [4], [5]. While suited to large CSNs, it is computationally complex and cannot determine scene scale. This paper utilises ‘1D’ calibration [6], in which collinear points are seen at arbitrary orientations. This work also uses Gaussian belief propagation (GaBP), a message-passing algorithm for aligning local estimates in a distributed network [7], and the alternating direction method of multipliers (ADMM), a method of splitting optimisation problems for distributed processing [8].

The original 1D calibration algorithm was for a single camera and constrained the calibration object to rotate about a fixed point [6]; this was later relaxed to allow planar motion [9]. The accuracy and speed of this algorithm have been improved using normalised image points and a partitioned Levenberg–Marquardt refinement [10], information-weighted algorithms [11], and by replacing non-linear optimisation with convex relaxation [12]. Full general motion of the calibration object has been achieved in multi-view CSNs, originally requiring an already-calibrated reference camera [13], and later without this requirement using vanishing points [14], robust perspective factorisation [15], and then fundamental matrices [16]. Our method improves the accuracy of the latter algorithm using two stages of normalisation and a single global bundle adjustment (BA).

Our method also utilises belief propagation, originally developed for inference on trees [17] and later extended to loopy graphs [18] and continuous variables [7]. Most distributed calibration algorithms use self-calibration, with Devarajan et al. modelling the CSN as a Markov random field (MRF) in which each node performs local SfM that is then aligned with belief propagation [19]. Tron and Vidal brought pairwise CSN pose estimates into alignment with average consensus on Riemannian manifolds [20]. There is also a range of work on distributed localisation for dynamic robots under changing topologies, using methods such as dynamic average consensus [21], fusion of relative measurements [22], robust control laws [23], and Kalman filter-based collaborative localisation [24]. Eriksson et al. have also demonstrated splitting BA across processors using proximal splitting, similar to ADMM [25]. Graphical models are widely used in BA [26] and localisation [27], and, as in the distributed case, must ensure consistent poses along graph cycles [28]. Our method uses a combination of ADMM and GaBP to localise over an MRF.

Our motivation was to develop a simple and robust calibration technique for ad-hoc CSNs, particularly for inexpensive MAVs that might be replaced or swapped often, where it could easily be run to provide a quick initialisation to a known scale for a dynamic localisation algorithm such as that in [22]. General-motion 1D calibration is well suited to a CSN due to the lack of self-occlusion at wide angles; however, such algorithms do not yet have accuracy comparable to other calibration algorithms. Furthermore, we use distributed processing to make the algorithm highly scalable.

As a basis for our work we have used the calibration and localisation algorithm of De França et al. [16]. However, we have used normalised image points and normalised projective 3D reconstructions to improve the accuracy of the key linear estimations, and have introduced a more geometrically meaningful non-linear refinement that replaces all non-linear refinement of the previous method. To adapt this algorithm to a distributed network aligned with ADMM and GaBP, we have taken additional steps: we operate on neighbourhood-based node clusters to perform the calibration separately at each camera; we use ADMM to split the global bundle adjustment objective into locally computable steps; and we align each iteration of the ADMM process using an application of GaBP that considers frame alignment and an error model.
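The conditioning idea behind normalised image points can be illustrated with a minimal Hartley-style normalisation (a sketch of the standard technique for illustration, not the paper's exact procedure): points are translated to their centroid and isotropically scaled so their mean distance from the origin is √2, which conditions the subsequent linear least-squares estimations.

```python
# Minimal sketch of Hartley-style point normalisation. The point values
# below are arbitrary illustrative inputs, not data from the paper.
import math

def normalise_points(points):
    """Return normalised 2D points and the 3x3 similarity transform T
    such that T applied to a homogeneous point gives the normalised one."""
    n = len(points)
    # Centroid of the point set.
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    # Scale so the mean distance from the centroid becomes sqrt(2).
    mean_dist = sum(math.hypot(x - cx, y - cy) for x, y in points) / n
    s = math.sqrt(2) / mean_dist
    T = [[s, 0.0, -s * cx],
         [0.0, s, -s * cy],
         [0.0, 0.0, 1.0]]
    normed = [((x - cx) * s, (y - cy) * s) for x, y in points]
    return normed, T

pts = [(100.0, 200.0), (300.0, 200.0), (200.0, 400.0)]
normed, T = normalise_points(pts)
```

After normalisation the points have zero centroid and mean norm √2; the transform T is kept so that estimates computed in the conditioned frame can be de-normalised afterwards.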

The paper is organised as follows: Section 2 introduces the preliminaries of the camera model, calibration object, and distributed network. Section 3 describes the 1D calibration algorithm and our improvements, and Section 4 adapts this to a distributed network. Section 5 provides experimental results on simulated and real data, and Section 6 concludes the paper.


The pinhole camera model

Consider a network of K + 1 ≥ 2 camera nodes with node 0 at the origin of the world coordinate system. Each node k has an intrinsic matrix A_k and extrinsic matrix [R_k | t_k], and takes N images of M points, m = [x, y]^T, each the projection of a 3D world point M = [X, Y, Z]^T. These have homogeneous forms m̃ = [x, y, 1]^T and M̃ = [X, Y, Z, 1]^T respectively. The jth image point of the ith image for node k is related to its corresponding world point by
$$\tilde{\mathbf{m}}_{kji} \simeq \mathbf{A}_k \left[\mathbf{R}_k \mid \mathbf{t}_k\right] \tilde{\mathbf{M}}_{ji}, \quad \text{where} \quad \mathbf{A}_k = \begin{bmatrix} \alpha_k & 0 & u_k \\ 0 & \beta_k & v_k \\ 0 & 0 & 1 \end{bmatrix}.$$

Here,  ≃  signifies equality up to a non-zero scale factor.
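The projection model above can be sketched in a few lines of plain Python (the intrinsic and extrinsic values below are illustrative assumptions, not parameters from the paper):

```python
# Minimal sketch of the pinhole projection m ~ A [R|t] M, with matrices
# as nested lists and names following the section's notation.

def matvec(M, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in M]

def project(A, R, t, M_world):
    """Project a 3D world point to pixel coordinates (x, y)."""
    # Camera-frame coordinates: R * M + t
    Xc = [rm + ti for rm, ti in zip(matvec(R, M_world), t)]
    # Apply intrinsics A, then dehomogenise (equality up to scale).
    u = matvec(A, Xc)
    return (u[0] / u[2], u[1] / u[2])

# Example: camera at the world origin looking down +Z, square pixels.
A = [[800.0, 0.0, 320.0],
     [0.0, 800.0, 240.0],
     [0.0, 0.0, 1.0]]
R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
t = [0.0, 0.0, 0.0]
print(project(A, R, t, [0.5, -0.25, 2.0]))  # → (520.0, 140.0)
```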

Multi-view calibration with one-dimensional objects

In this section we briefly describe the multi-view 1D calibration algorithm of De França et al. [16], then detail our three improvements to this method. We present a flowchart of the full algorithm in Fig. 3.

Calibrating a distributed camera sensor network

For use in an ad-hoc network, we adapted our improved 1D calibration algorithm to perform distributed processing using a combination of ADMM and GaBP. In this section, we first introduce consensus with ADMM and GaBP, then present our work to adapt the problem to this framework.

Experimental results

We ran simulations to test the effects of noise, pattern displacements, and the number of calibration points, and to compare the centralised and distributed algorithms. We then verified these results on real images.

Conclusions

We have proposed a 1D calibration and localisation algorithm well-suited for low-cost distributed ad-hoc camera networks. Whilst there are many popular calibration methods, namely 2D and self-calibration, ours has a number of advantages. The lack of self-occlusion of the calibration object suits larger CSNs, the general motion makes it simple to use, and it can determine scene scale. Our contributions were to improve an existing 1D algorithm with better numerical conditioning and global BA. We

Acknowledgments

This research has been conducted with the support of the Australian Government Research Training Program Scholarship.

Brendan Halloran obtained his Bachelor of Engineering degree in 2016 from the University of Wollongong, where he is currently pursuing his PhD in distributed processing for robotic vision with the School of Electrical, Computer and Telecommunications Engineering.

References (35)

  • R. Munoz-Salinas et al., SPM-SLAM: simultaneous localization and mapping with squared planar markers, Pattern Recognit. (2019)
  • R. Munoz-Salinas et al., Mapping and localization from planar markers, Pattern Recognit. (2018)
  • R. Hartley et al., Multiple View Geometry in Computer Vision (2003)
  • Z. Zhang, A flexible new technique for camera calibration, IEEE Trans. Pattern Anal. Mach. Intell. (2000)
  • S. Ramalingam et al., A unifying model for camera calibration, IEEE Trans. Pattern Anal. Mach. Intell. (2017)
  • Q.-T. Luong et al., Self-calibration of a moving camera from point correspondences and fundamental matrices, Int. J. Comput. Vis. (1997)
  • F. Vasconcelos et al., Automatic camera calibration using multiple sets of pairwise correspondences, IEEE Trans. Pattern Anal. Mach. Intell. (2018)


Prashan Premaratne obtained his PhD in Electrical and Computer Engineering from the National University of Singapore in 2001 and his Bachelor of Engineering degree in 1997 from the University of Melbourne, Australia. He is a Senior Lecturer at the University of Wollongong, Australia and a Senior Member of IEEE.

Peter James Vial is currently a Senior Lecturer at the School of Electrical, Computer and Telecommunications Engineering at Wollongong University. He completed his PhD in 2009. His main research interest is in wireless communications systems, especially related to Ultra-Wideband systems.
