A UAV for bridge inspection: Visual servoing control law with orientation limits

https://doi.org/10.1016/j.autcon.2006.12.010

Abstract

This paper describes the dynamics of an unmanned aerial vehicle (UAV) intended for structural monitoring and bridge maintenance. It presents a novel control law, based on computer vision, for quasi-stationary flights above a planar target. The first part of the UAV's mission is navigation from an initial to a final position in an unknown 3D environment. The new control law uses the homography matrix computed from the information provided by the vision system. The control scheme is derived with backstepping techniques. In order to keep the target in the camera's field of view, the control law uses saturation functions that bound the UAV orientation to very small values.

Introduction

Rapid advances in control theory, computing, communications, and sensing technology have greatly benefited unmanned aerial vehicle technology. Over the last two decades, interest in UAV technology has risen sharply in military applications, and many projects have been studied and deployed.

At LCPC-Paris (Laboratoire Central des Ponts et Chaussées), we have started a project on civil applications of a UAV: bridge inspection and traffic surveillance. The bridge-inspection project is called PMI (Plate-forme Mobile d'Instrumentation): a UAV capable of quasi-stationary flights whose mission is the inspection of bridges and the location of defects and cracks.

Ageing infrastructure has become a major concern in western European countries. In France, roughly half of a bridge's life-cycle cost is due to repair and maintenance. Since many bridges were built in the sixties, health diagnostics and assessment of residual life are already very important and will become increasingly crucial in the coming decades. To this end, systematic bridge inspection has long been organized, and well-defined visual observation and reporting tasks are periodically carried out on bridges [1], [2]; this inspection must be done at least once every 6 years to monitor the evolution of defects and cracks. Current visual inspection involves rather heavy logistics. A large platform mounted on a heavy truck is operated by a team of highly specialized inspectors, who work in extremely difficult and risky conditions, looking for small defects or damage in sometimes hardly accessible components of the structure (e.g. cables). Of course, the bridge under inspection is closed to traffic. Fig. 1 shows an example of a classical inspection footbridge.

Progress in visual bridge inspection looks possible in the direction of remote sensing and process automation. Unmanned aerial vehicles could be of particular interest as camera carriers and image transmitters. The UAV could follow a predetermined path or move by visual servoing, and detect, by means of image processing, the size and location of defects and cracks. Using a drone for inspection offers multiple advantages over the traditional inspection method:

  • Reducing work accident risk.

  • Budget reduction: less logistics and less working hours.

  • The bridge need not be closed to traffic.

  • The possibility of using nondestructive techniques (infrared, shearography, …) for crack detection.

For this application, a novel control law based on visual servoing is derived to control the UAV. Almost all control schemes for UAVs are built around a vision system, using visual servoing as a control method. A typical vision system includes an off-the-shelf camera, an Inertial Navigation System (INS) and, in some cases, a Global Positioning System (GPS).

In this paper, we consider a general mechanical model of a flying robot capable of quasi-stationary maneuvers. We then derive a control law for the autonomous hovering system from classical backstepping techniques [3], based on separating the translational from the rotational rigid-body (airframe) dynamics. A novel approach is also presented that limits the orientation of the UAV; limiting the orientation ensures that the object remains in the camera's field of view. We prove the stability of this strategy, which is based on saturation functions. Lastly, we present simulation results for the new control law and the feasibility trial executed on the viaduct of Saint Cloud in Paris.


Related work

The principal question that naturally arises when using vision in control applications is:

How should the information from vision sensors be used for robotic control purposes?

There exist three different methods of visual servoing: 3D, 2D and 2½D. Techniques that involve reconstruction of the target pose with respect to the camera are called position-based visual servoing (3D visual servoing). This kind of technique leads to a Cartesian motion-planning problem. Its main

A general UAV dynamic model

Deriving a general dynamic model for a UAV is not an easy task because each vehicle has its own capabilities and aerodynamic properties. In this section, we derive mechanical equations for UAVs in hover or quasi-stationary conditions.

Let F = {Ex, Ey, Ez} denote a right-handed inertial (world) frame, such that Ez denotes the vertical direction, pointing downwards into the earth. Let ξ = (x, y, z) denote the position of the center of mass of the vehicle in the frame F relative to a fixed origin in F
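The quasi-stationary model sketched above can be simulated numerically. The following is a minimal, illustrative sketch of the standard Newton–Euler rigid-body equations for a hovering UAV (thrust u along the body z-axis, Ez pointing downwards as in the text); the function names, inertia values, and the simple Euler integration are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def sk(v):
    """Skew-symmetric matrix: sk(v) @ w == np.cross(v, w)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def rigid_body_step(xi, v, R, Omega, u, Gamma, m, I_b, g=9.81, dt=0.01):
    """One Euler step of a standard quasi-stationary UAV model:
        m * xi_ddot   = m*g*Ez - u * R @ Ez   (thrust u along body z-axis)
        R_dot         = R @ sk(Omega)
        I_b*Omega_dot = -Omega x (I_b @ Omega) + Gamma
    Ez points downwards, matching the frame convention in the text."""
    Ez = np.array([0.0, 0.0, 1.0])
    xi_new = xi + dt * v
    v_new = v + dt * (g * Ez - (u / m) * R @ Ez)
    R_new = R @ (np.eye(3) + dt * sk(Omega))   # first-order attitude update
    Omega_new = Omega + dt * np.linalg.solve(
        I_b, -np.cross(Omega, I_b @ Omega) + Gamma)
    return xi_new, v_new, R_new, Omega_new
```

At hover equilibrium (level attitude, zero angular velocity, thrust u = mg) the model correctly yields zero acceleration, which is the operating point the control design assumes.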

Camera modeling and visual servoing method

In this section we present a brief discussion of the camera projection model and then introduce the homography matrix used in the 2½D visual servoing method.

For the control law, we derive the error dynamics from Eqs. (1)–(4); they are based on a defined visual error. Using Lyapunov control design, we find the desired force u and the desired orientation Rd that make the UAV converge to the position described by a desired image.
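The homography at the heart of the 2½D method relates the current and desired images of the planar target. As an illustration only, a homography can be estimated from four or more point correspondences with the classical Direct Linear Transform; this sketch is a textbook DLT, not the estimator used in the paper, and all names are hypothetical.

```python
import numpy as np

def homography_dlt(pts_src, pts_dst):
    """Estimate H (up to scale), with pts_dst ~ H @ pts_src, from >= 4
    correspondences of coplanar points (Direct Linear Transform)."""
    A = []
    for (x, y), (u, v) in zip(pts_src, pts_dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The solution is the right singular vector of the smallest singular value
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]          # normalize so H[2, 2] = 1

def apply_h(H, p):
    """Map a 2-D point through H using homogeneous coordinates."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]
```

With noise-free correspondences the recovered H matches the true homography exactly (up to the fixed scale); in practice a robust estimator would be used on real image features.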

Limiting the UAV orientation

In the theoretical developments based on backstepping [3], the proposed control law ensures exponential convergence towards the desired position. However, this type of convergence is not recommended when the vehicle is initially far from the desired position. Indeed, the dynamic model based on quasi-stationary (hovering) conditions is no longer valid, because the dynamics of such a convergence would provoke a different flight mode. Moreover, the target image may leave the

Attitude dynamics control

The next step of the control design involves the control of the attitude dynamics such that the orientation error R̃ converges exponentially to the identity I. We use a quaternion representation of the rotation to obtain a smooth control. The attitude deviation is parameterized by a rotation γ̃ around the unit vector k̃. Using Rodrigues' formula [11] one has

\tilde{R} = I + \sin(\tilde{\gamma})\,\mathrm{sk}(\tilde{k}) + \bigl(1 - \cos(\tilde{\gamma})\bigr)\,\mathrm{sk}(\tilde{k})^{2}.

The quaternion representation describing the deviation is given by [13]

\tilde{\eta} := \sin\!\Bigl(\tfrac{\tilde{\gamma}}{2}\Bigr)\,\tilde{k}, \qquad \tilde{\eta}_{0} := \cos\!\Bigl(\tfrac{\tilde{\gamma}}{2}\Bigr), \quad \text{with } \|\tilde{\eta}\
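Rodrigues' formula and the axis–angle quaternion can be checked numerically; the following is a self-contained sketch (function names are illustrative) that builds a rotation matrix and the corresponding unit quaternion from an axis k̃ and angle γ̃.

```python
import numpy as np

def sk(v):
    """Skew-symmetric matrix of a 3-vector."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def rodrigues(k, gamma):
    """Rodrigues' formula: R = I + sin(g) sk(k) + (1 - cos(g)) sk(k)^2
    for a unit axis k and angle g."""
    K = sk(np.asarray(k, dtype=float))
    return np.eye(3) + np.sin(gamma) * K + (1.0 - np.cos(gamma)) * K @ K

def quat_from_axis_angle(k, gamma):
    """Quaternion (eta0, eta) with eta = sin(g/2) k, eta0 = cos(g/2)."""
    return np.cos(gamma / 2.0), np.sin(gamma / 2.0) * np.asarray(k, dtype=float)
```

For example, a rotation of γ̃ = π/2 about k̃ = Ez maps Ex to Ey, and the resulting quaternion satisfies the unit-norm constraint ‖η̃‖² + η̃₀² = 1 by construction.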

Simulation results

In order to evaluate the efficiency of the proposed servoing technique with orientation limits, simulation results for a hovering robot are presented. The experiment considers a basic stabilization mission. The target is composed of five points: four on the vertices of a planar square and one on its center. The available signals are the pixel coordinates of the five points observed by the camera.

For this experiment, it is assumed that the desired plane is perpendicular to the line of sight. The
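The measurements available to the simulated controller are the pixel coordinates of the five coplanar points. A minimal pinhole-camera sketch of how such measurements could be generated is given below; the intrinsic parameters, target size, and camera pose are hypothetical values chosen for illustration, not the paper's simulation settings.

```python
import numpy as np

# Five coplanar target points: the vertices of a planar square plus its
# center (metres, lying in the z = 0 plane of the target frame).
target = np.array([[-0.5, -0.5, 0.0], [0.5, -0.5, 0.0],
                   [0.5, 0.5, 0.0], [-0.5, 0.5, 0.0],
                   [0.0, 0.0, 0.0]])

# Hypothetical intrinsics: focal length 800 px, principal point (320, 240).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

def project(points, R, t):
    """Pixel coordinates of 3-D points under camera pose (R, t): p ~ K (R X + t)."""
    cam = points @ R.T + t        # coordinates in the camera frame
    px = cam @ K.T                # homogeneous pixel coordinates
    return px[:, :2] / px[:, 2:3]

# Camera 2 m in front of the target, optical axis perpendicular to its plane
# (the situation assumed in the experiment above).
pixels = project(target, np.eye(3), np.array([0.0, 0.0, 2.0]))
```

With this fronto-parallel pose, the center point projects exactly onto the principal point, and the four vertices land symmetrically around it, which makes the desired image easy to specify.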

Conclusion

This paper presents a control law for a UAV under some practical restrictions. It describes a control strategy for autonomous flight with orientation limits, in order to keep the object in the camera's field of view. This strategy only requires that the system be able to measure, with a video camera, the image-plane mapping of features on a planar surface. A control law based on separating the translation and orientation dynamics is exploited. It also limits the UAV's orientation to small values of

References (14)

  • A. Teel, Global stabilization and restricted tracking for multiple integrators with bounded controls, Syst. Control Lett. (1992)

  • B. Godart

  • Instruction technique pour la surveillance et l'entretien des ouvrages d'art, documents edited and diffused by the LCPC and SETRA (1991)

  • T. Hamel et al., Visual servoing of under-actuated dynamic rigid body system: an image space approach

  • S. Hutchinson et al., A tutorial on visual servo control, IEEE Trans. Robot. Autom. (1996)

  • E. Malis, Contribution à la modélisation et à la commande en asservissement visuel, Ph.D. thesis, Université de Rennes...

  • D. Suter et al., Visual servoing based on homography estimation for the stabilization of an X4-flyer (2002)