Abstract
This article presents the development of an autonomous navigation system for the intelligent sailboat Sensailor. The vehicle will be equipped with a Lidar and a camera for obstacle recognition. The collected data will allow the sailboat to register obstacle coordinates and determine the zones for safe navigation. Based on that, the intelligent system will apply AI path-planning algorithms to obtain the shortest safe trajectory. In this way, it is guaranteed that the Sensailor can travel from an initial position to a target position while optimizing time and resources.
1 Introduction
The Sensailor is an unmanned surface vehicle originally developed by the Polytechnic University of Catalonia [1]. The main objective of this work is the development of a more robust navigation system. To this end, an NVIDIA Jetson Nano embedded system will be used to obtain data from a Lidar sensor and a camera for obstacle recognition. The same embedded computer will then process the information and generate new trajectories to be followed by the control system. This will increase the Sensailor's versatility, allowing it to perform in different unknown environments and to generate new feasible trajectories in real time.
2 Methodology
2.1 Autonomous Mapping and Localization
The autonomous sailboat moves in a highly changing environment. The integrity of the unmanned vehicle must be ensured by estimating its position and orientation, as well as by mapping its surroundings. To this end, information from multiple sensors is fused, merging the data obtained from a camera with the Lidar point cloud. First, a preprocessing layer filters both the camera images and the Lidar point cloud. To improve the effectiveness of obstacle detection, the sequence of steps stipulated in Fig. 1 is followed [1].
The first stage eliminates the reflections generated on the surface of the water, since they interfere with detecting and tracking moving objects on the surface. The input image is converted to grayscale, and the mean threshold function [2] converts it into a binary image, Fig. 2 (a–b); then the 9-square-grid method is applied to detect the outline: a pixel is marked as a contour point when at least one of its 8 neighbors differs from it.
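The two steps above can be sketched as follows; this is a minimal illustration of mean-threshold binarization and the 9-square-grid contour check, not the authors' implementation, and the function names are ours:

```python
import numpy as np

def mean_threshold_binarize(gray: np.ndarray) -> np.ndarray:
    """Binarize a grayscale image using the mean intensity as threshold."""
    return (gray > gray.mean()).astype(np.uint8)  # 1 = foreground, 0 = background

def contour_mask(binary: np.ndarray) -> np.ndarray:
    """9-square-grid contour check: a pixel lies on a contour if any of
    its 8 neighbours differs from it (borders left unmarked for brevity)."""
    h, w = binary.shape
    out = np.zeros_like(binary)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = binary[i - 1:i + 2, j - 1:j + 2]
            if np.any(patch != binary[i, j]):
                out[i, j] = 1
    return out
```

A production version would vectorize the neighbour check, but the double loop makes the grid rule explicit.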
The noise present in the images taken by the USV also affects obstacle detection. If an image has poor quality, denoising is performed, Fig. 2 (c–e); otherwise this step is omitted. To determine image quality, techniques such as MSE (Mean Square Error), PSNR (Peak Signal-to-Noise Ratio), and SSIM (Structural Similarity Index Method) are used; some results are shown in Fig. 3 (a–e). If the noise amounts to an unacceptable distortion, the BM3D (Block-Matching and 3D Filtering) algorithm is applied to remove it.
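As an illustration of one of these quality metrics, a minimal PSNR computation (standard definition, not tied to the paper's specific thresholds) looks like this:

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, peak: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio in dB; higher means less distortion.
    `peak` is the maximum possible pixel value (255 for 8-bit images)."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

A decision rule such as "denoise when PSNR falls below some threshold" would then gate the BM3D step; the threshold value itself is not given in the paper.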
The Watershed algorithm is applied to segment complex images by detecting contours and finding overlapping objects within the image. For sea–sky detection, the Otsu method [3] is included, which chooses the threshold that decreases the dispersion within each segment while increasing it between segments (Fig. 4). Additionally, the Hough transform extracts candidate horizons, and a single horizon is chosen through probabilistic intervals [4].
It was decided to merge the high-quality camera images with the Lidar point cloud: the camera provides high-resolution imagery, while the Lidar provides more accurate depth data.
The relationship between arbitrary world coordinates and the image-pixel coordinate system [5] is estimated by taking the center of the lens as the origin, with the x- and y-axes parallel to the sides of the image plane and the z-axis, which represents the optical axis of the lens, perpendicular to the image plane, as shown in Eq. (1).
The image data from the camera is combined with the three-dimensional point cloud produced by the Lidar [6]; for this, a transformation matrix, Eq. (2), is created that maps 3D points to 2D pixel coordinates.
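A minimal sketch of such a 3D-to-2D mapping under the standard pinhole model follows; the intrinsic matrix K and the identity extrinsics are illustrative placeholders, since the paper does not publish its calibration values:

```python
import numpy as np

# Hypothetical intrinsics K: focal lengths fx, fy and principal point cx, cy
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Extrinsics: lidar-to-camera rotation R and translation t.
# Identity here for illustration; in practice they come from calibration.
R = np.eye(3)
t = np.zeros(3)

def project_point(p_lidar: np.ndarray) -> np.ndarray:
    """Map a 3-D lidar point to 2-D pixel coordinates (u, v)."""
    p_cam = R @ p_lidar + t   # bring the point into the camera frame
    uvw = K @ p_cam           # perspective projection (homogeneous)
    return uvw[:2] / uvw[2]   # divide by depth to get pixels
```

A point on the optical axis projects to the principal point, which is a quick sanity check on any calibration.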
Obstacles were detected with YOLOv3 following certain rules. Regions of the image are selected and labeled according to their position. The model uses a convolutional neural network that extracts features from the image, predicting the location and category of each obstacle. Each prediction is contrasted with its label, and the loss function evaluates the deviation between the real scenario and the one predicted by the network (Fig. 4). The point cloud obtained from the Lidar is continuously processed, yielding the point-cloud cluster that detects the obstacle (Fig. 5).
Considering that weather conditions generate dispersion and interference points in the Lidar point cloud, a conditional elimination filter [7] is used to analyze the neighborhood between each point and calculate the Euclidean distance [8]. If a point has very few neighbors, it is considered atypical interference.
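A brute-force version of this neighbourhood filter can be written in a few lines; the radius and neighbour-count parameters are illustrative, as the paper does not state its values:

```python
import numpy as np

def remove_outliers(points: np.ndarray, radius: float, min_neighbors: int) -> np.ndarray:
    """Keep only points with at least `min_neighbors` other points within
    `radius` (Euclidean distance); sparse points are treated as
    weather-induced interference. `points` has shape (N, 3)."""
    # All pairwise Euclidean distances (fine for small clouds;
    # a KD-tree would be used at scale)
    diff = points[:, None, :] - points[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    neighbor_count = (dist < radius).sum(axis=1) - 1  # exclude the point itself
    return points[neighbor_count >= min_neighbors]
```

The O(N²) distance matrix is the simplest way to show the condition; the conditional-removal filter in [7] applies the same neighbour criterion more efficiently.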
Fusion of camera and Lidar data is achieved by projecting the target point-cloud cluster onto the image, calculating its two-dimensional bounding box, and joining its information with that of the image bounding boxes.
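Associating a projected cluster box with a detection box is typically done by overlap; a standard intersection-over-union measure (our choice of association metric, not stated explicitly in the paper) looks like this:

```python
def iou(box_a, box_b) -> float:
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2);
    can be used to match a projected lidar cluster to an image detection."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

Pairs whose IoU exceeds a threshold would be fused, combining the camera's class label with the Lidar's range.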
The USV can detect an obstacle together with its distance and angular direction relative to the autonomous sailboat, while the electronic compass and inertial navigation provide the vehicle's latitude, longitude, and heading. With Eq. (3), the longitude "J" and latitude "W" of the obstacle are found, where "U" denotes the autonomous sailboat and "A" the obstacle, considering that latitude changes by one degree every 111 km.
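A sketch of this range-and-bearing conversion, using the stated 111 km-per-degree approximation (the exact form of Eq. (3) is not reproduced here, so this is our reconstruction of the idea):

```python
import math

def obstacle_lat_lon(lat_u: float, lon_u: float, dist_m: float, bearing_deg: float):
    """Estimate obstacle latitude/longitude from the USV position, the
    measured range (metres) and bearing (degrees clockwise from north),
    using the approximation of 111 km per degree of latitude."""
    m_per_deg = 111_000.0
    b = math.radians(bearing_deg)
    d_lat = dist_m * math.cos(b) / m_per_deg
    # A degree of longitude shrinks with latitude by cos(latitude)
    d_lon = dist_m * math.sin(b) / (m_per_deg * math.cos(math.radians(lat_u)))
    return lat_u + d_lat, lon_u + d_lon
```

This flat-earth approximation is accurate at obstacle-detection ranges (tens to hundreds of metres), where great-circle corrections are negligible.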
2.2 Planning
The workspace is discretized into a matrix whose cell size guarantees that the sailboat can always fit inside a cell. The coordinates of the obstacles are redefined in terms of their equivalent cells in the matrix. The available cells where the sailboat can move are called safe zones; non-safe zones are those that contain an obstacle, or a portion of one, or that lie on the limits of the workspace. To guarantee safe navigation, the norms stipulated in the Convention on the International Regulations for Preventing Collisions at Sea (COLREGs) must also be considered [9].
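The discretization can be sketched as an occupancy matrix; the square workspace and single-cell obstacles are simplifying assumptions for illustration:

```python
import numpy as np

def build_grid(obstacles, world_size: float, cell_size: float) -> np.ndarray:
    """Discretize a square workspace into an occupancy matrix:
    0 = safe zone, 1 = non-safe (obstacle cell or workspace limit).
    `obstacles` is a list of (x, y) coordinates in world units."""
    n = int(np.ceil(world_size / cell_size))
    grid = np.zeros((n, n), dtype=np.uint8)
    # Workspace limits count as non-safe zones
    grid[0, :] = grid[-1, :] = grid[:, 0] = grid[:, -1] = 1
    for x, y in obstacles:
        grid[int(y // cell_size), int(x // cell_size)] = 1  # cell holding the obstacle
    return grid
```

An obstacle spanning several cells would mark each of them; here one cell per coordinate keeps the sketch short.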
The trajectories are optimized toward the shortest safe path. Special attention must be paid to redundant movements, preventing loops or traversing the same segment several times without justification. For this, a combination of two trajectory-planning algorithms was used, as shown in Fig. 6.
The intelligent algorithm A* evaluates the matrix by classifying its cells as occupied or free. An admissible heuristic function is taken as a reference, considering the best possible scenario: the absence of obstacles, in which the motion can be carried out along a single straight line. The heuristic h is the Euclidean distance between the evaluated point and the final point. Each cell has an associated value gi, corresponding to the number of steps that must be taken from the origin cell to reach it. Adding both parameters determines the cost fi of each cell. The algorithm chooses the cell with the lowest cost to update its current position. If the trajectory reaches a cell ci with all its paths blocked, the algorithm returns to the previous position ci−1, eliminates the option of moving to ci, and selects the adjacent cell with the second-lowest cost. These actions are repeated until the program finds a path that reaches the destination through the cells with the lowest possible cost [10] (Fig. 7).
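The procedure above is standard A* with g = steps from the start and h = Euclidean distance to the goal; a compact sketch over a 4-connected grid (our implementation, not the authors') is:

```python
import heapq
import itertools
import math

def a_star(grid, start, goal):
    """A* over a 4-connected occupancy grid (0 = free, 1 = occupied).
    g = steps from start, h = Euclidean distance to goal, f = g + h."""
    h = lambda c: math.dist(c, goal)
    tie = itertools.count()  # tiebreaker so the heap never compares cells
    open_set = [(h(start), next(tie), start, None)]
    came_from = {}
    g_cost = {start: 0}
    while open_set:
        _, _, cell, parent = heapq.heappop(open_set)
        if cell in came_from:
            continue  # already expanded via a cheaper route
        came_from[cell] = parent
        if cell == goal:  # reconstruct the path back to the start
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                ng = g_cost[cell] + 1
                if ng < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt), next(tie), nxt, cell))
    return None  # no safe path exists
```

The backtracking described in the text corresponds to the priority queue naturally abandoning a dead-end cell and re-expanding from the next-cheapest frontier cell.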
Since the system requires constant updating of the sampled space, the trajectory generation system is complemented with an algorithm based on potential fields. Unlike the A* algorithm, it can be implemented in real time, constantly updating the trajectory according to the motion of the obstacles. The direction of movement of the vehicle is established by the direction of the force resulting from the sum of the attraction and repulsion forces [11]. In this way, the final trajectory followed by the USV stays very close to the optimal route generated by the A* algorithm, while the trajectory is updated efficiently in real time as required (Fig. 8).
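A minimal sketch of the resultant-force computation follows; the gain constants and influence radius are illustrative placeholders, as the paper does not publish its tuning:

```python
import numpy as np

def resultant_force(pos, goal, obstacles, k_att=1.0, k_rep=100.0, d0=5.0):
    """Potential-field heading: attraction proportional to the vector
    towards the goal, plus a repulsion term from each obstacle closer
    than the influence radius d0. Returns the resultant force vector."""
    pos, goal = np.asarray(pos, float), np.asarray(goal, float)
    force = k_att * (goal - pos)  # attractive term
    for obs in obstacles:
        diff = pos - np.asarray(obs, float)
        d = np.linalg.norm(diff)
        if 0 < d < d0:
            # Repulsion grows rapidly as the obstacle gets closer
            force += k_rep * (1.0 / d - 1.0 / d0) * diff / d ** 3
    return force
```

The vehicle steers along the direction of this vector at each control step; obstacles beyond d0 contribute nothing, so only nearby hazards bend the path away from the A* route.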
3 Conclusions
This research demonstrates a solution for the autonomous navigation of a sailboat. Obstacles were detected by segmenting and labeling images; so that the sailboat can perceive any nearby object, image quality was corrected and noise was reduced. A fast response (up to 30 FPS) was achieved by implementing YOLO to tag obstacles in real time. With the obstacle location data, the sailboat automatically traces an avoidance route: first, the intelligent search algorithm A* generates the shortest safe path, and for real-time navigation it is complemented by the potential fields algorithm. It was assumed that the GPS provides correct coordinates without error; therefore, in future research, a Kalman filter could be added to refine the estimate of the USV's location.
References
Moreno-Ortiz, A., et al.: Modelling of an intelligent control strategy for an autonomous Sailboat – SenSailor. In: 2022 5th International Conference on Advanced Systems and Emergent Technologies (IC_ASET), Hammamet, Tunisia, pp. 34–38 (2022). https://doi.org/10.1109/IC_ASET53395.2022.9765928
Zhang, W., et al.: Research on unmanned surface vehicles environment perception based on the fusion of vision and lidar. IEEE Access 9, 63107–63121 (2021). https://doi.org/10.1109/ACCESS.2021.3057863
Kim, H., et al.: Vision-based real-time obstacle segmentation algorithm for autonomous surface vehicle. IEEE Access 7, 179420–179428 (2019). https://doi.org/10.1109/ACCESS.2019.2959312
Kristan, M., Kenk, V.S., Kovačič, S., Perš, J.: Fast image-based obstacle detection from unmanned surface vehicles. IEEE Trans. Cybern. 46(3), 641–654 (2016). https://doi.org/10.1109/TCYB.2015.2412251
Liang, D., Liang, Y.: Horizon detection from electro-optical sensors under maritime environment. IEEE Trans. Instrum. Meas. 69(1), 45–53 (2020). https://doi.org/10.1109/TIM.2019.2893008
Ren, J., Zhang, J., Cui, Y.: Autonomous obstacle avoidance algorithm for unmanned surface vehicles based on an improved velocity obstacle method. ISPRS Int. J. Geo-Inf. 10(9) (2021). https://doi.org/10.3390/ijgi10090618
Wu, P., et al.: Autonomous obstacle avoidance of an unmanned surface vehicle based on cooperative manoeuvring. Ind. Robot. 44(1), 64–74 (2017). https://doi.org/10.1108/IR-04-2016-0127
Muhovic, J., Mandeljc, R., Bovcon, B., Kristan, M., Pers, J.: Obstacle tracking for unmanned surface vessels using 3-D point cloud. IEEE J. Oceanic Eng. 45(3), 786–798 (2020). https://doi.org/10.1109/JOE.2019.2909507
Polvara, R., et al.: Obstacle avoidance approaches for autonomous navigation of unmanned surface vehicles. J. Navig. 71(1), 241–256 (2018). https://doi.org/10.1017/S0373463317000753
Benjamin, M.R., Curcio, J.A.: COLREGS-based navigation of autonomous marine vehicles. In: 2004 IEEE/OES Autonomous Underwater Vehicles, pp. 32–39 (2004). https://doi.org/10.1109/AUV.2004.1431190
Ju, C., Luo, Q., Yan, X.: Path planning using artificial potential field method and a-star fusion algorithm. In: Global Reliability and Prognostics and Health Management, PHM-Shanghai (2020). https://doi.org/10.1109/PHM-SHANGHAI49105.2020.9280929
Rights and permissions
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Copyright information
© 2023 The Author(s)
Cite this paper
Fajardo-Pruna, M., Sanchez-Orozco, D., Torres-Medina, K., Lopez-Estrada, L., Tutiven, C., Vidal, Y. (2023). Autonomous Navigation for an Intelligent Sailboat - Sensailor. In: Vizán Idoipe, A., García Prada, J.C. (eds) Proceedings of the XV Ibero-American Congress of Mechanical Engineering. IACME 2022. Springer, Cham. https://doi.org/10.1007/978-3-031-38563-6_65
Print ISBN: 978-3-031-38562-9
Online ISBN: 978-3-031-38563-6