Waterline and obstacle detection in images from low-cost autonomous boats for environmental monitoring
Introduction
The water quality monitoring process costs more than 1 billion EUR per year at the European Union level [1]. In particular, analyses of pollution in large rivers or lakes supplying drinking water cost in the range of 150,000 to 400,000 EUR. The current approach, based on field sampling and laboratory analysis, is unable to assess temporal and spatial variation in the contaminants of concern. This means that, in the event of an accident, decisions for mitigating the pollution impacts may be inadequate or taken too late.
The use of autonomous surface vehicles (ASVs) for persistent large-scale monitoring of aquatic environments is a valid and efficient alternative to the traditional manual sampling approach [2]. ASVs are capable of undertaking long-endurance missions and carrying multiple sensors (e.g., for measuring electrical conductivity and dissolved oxygen) to obtain water quality indicators [3]. Commercial ASVs specifically developed for water quality monitoring exist. An example is the Lutra mono-hull boat produced by Platypus, shown in Fig. 1. Lutra boats are used in the EU-funded Horizon 2020 project INTCATCH, which aims to develop a new paradigm for monitoring river and lake water quality by bringing together, validating, and exploiting a range of innovative tools in a single efficient and user-friendly model.
To achieve true autonomous navigation, an ASV must sense its environment and avoid possible collisions. In this paper, we propose to use vision-based sensing to reach this goal, focusing on the domain of small, low-cost ASVs where sensors commonly utilized for localization purposes (e.g., LiDARs) and powerful processing units cannot be used due to price caps.
The contribution of this work is two-fold: (i) we present a pixel-wise deep learning based method able to segment images captured by cameras on a low-cost ASV into water/non-water regions, and (ii) we describe an algorithm for extracting the horizon line (called waterline in this context) from the segmented images and for detecting potential floating obstacles. It is worth noticing that the relationship between the waterline and obstacles depends heavily on the pitch and roll angles of the ASV, which are only roughly measured by the cheap gyroscope and accelerometer sensors available in low-cost boats. Our goal is to use the waterline to reduce the search space for the detection of potential obstacles and to deal with the pitch and roll movements, which can cause misdetections, by dynamically tracking obstacle positions over time. In particular, we use Convolutional Neural Networks (CNNs) for image segmentation, edge detection and RANSAC for waterline extraction, and a multi-object tracker based on cost computation.
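To illustrate the cost-based association step of such a multi-object tracker, the following minimal sketch greedily matches tracked obstacle positions to new detections using Euclidean distance as the cost. This is an assumption-laden toy, not the paper's implementation: the function names, the distance cost, and the `max_cost` gating threshold are our own choices for illustration.

```python
import math

def greedy_associate(tracks, detections, max_cost=50.0):
    """Associate detections to existing tracks by greedily taking the
    lowest-cost (Euclidean distance) pair first. Unmatched detections
    start new tracks; a full tracker would also age-out stale tracks.
    NOTE: illustrative sketch only, not the authors' actual tracker."""
    # Enumerate every (cost, track index, detection index) pair, cheapest first.
    pairs = sorted(
        ((math.dist(t, d), ti, di)
         for ti, t in enumerate(tracks)
         for di, d in enumerate(detections)),
        key=lambda p: p[0],
    )
    used_t, used_d, matches = set(), set(), {}
    for cost, ti, di in pairs:
        if cost > max_cost:      # remaining pairs are even costlier: stop
            break
        if ti in used_t or di in used_d:
            continue             # one-to-one assignment
        matches[ti] = di
        used_t.add(ti)
        used_d.add(di)
    new_tracks = [di for di in range(len(detections)) if di not in used_d]
    return matches, new_tracks
```

For example, with tracks at (10, 10) and (40, 40) and detections at (12, 11) and (90, 90), the first detection is matched to the first track and the second (too far from everything) would spawn a new track.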
Although CNNs were previously introduced in other mobile robotics domains (e.g., indoor wheeled robots and quadrotors), and ASV obstacle detection was tackled with classical computer vision methods (e.g., optical flow), the two approaches had never been combined before.
We provide the complete source code of our method (see Section 6) and the dataset of images and videos created for testing (see the INTCATCH Vision Dataset). This dataset contains annotated data that can be used for developing supervised approaches to object detection.
The remainder of the paper is structured as follows. An overview of the INTCATCH system is given in Section 2 and related work is discussed in Section 3. The proposed segmentation algorithm is presented in Section 4, while the waterline and obstacle detection process is described in Section 5. Experimental results are shown in Section 6. Finally, conclusions are drawn in Section 7.
System overview
The realization of an easy-to-use system that can be used by non-expert users is a key aspect of the INTCATCH project. A general overview of the INTCATCH autonomous boat system is shown in Fig. 2. Its main components are briefly described below.
Boat. Two different models of the Lutra boat were acquired by the INTCATCH consortium to cover different deployment scenarios: (1) the Prop boat, whose in-water propellers allow deployment in as little as 25 cm of water; (2) the Airboat, with a covered fan
Related work
Monocular vision-based obstacle detection for ASVs has previously received some interest. The approaches presented in the literature are mainly unsupervised.
Water/non-water classification
Raw images coming from the camera mounted on the boat are used as input to a Fully Convolutional Neural Network that classifies pixels as water or non-water. In the following, we describe the network architecture and the dataset used for training, and provide details on the learning process.
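The network architecture itself is described in the full text; as a toy illustration of the shape of computation a fully convolutional network performs, the sketch below runs a single hand-picked 3x3 filter over a grayscale image and thresholds a per-pixel sigmoid to produce a water/non-water mask. The kernel, bias, and threshold are our own illustrative stand-ins for learned parameters, not the paper's network.

```python
import math

def conv2d_same(img, kernel, bias=0.0):
    """Single-channel 3x3 convolution with zero padding ('same' output).
    img is a list of rows of floats; kernel is a 3x3 list of lists."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = bias
            for ky in range(3):
                for kx in range(3):
                    iy, ix = y + ky - 1, x + kx - 1
                    if 0 <= iy < h and 0 <= ix < w:
                        acc += kernel[ky][kx] * img[iy][ix]
            out[y][x] = acc
    return out

def water_mask(img, kernel, bias, thr=0.5):
    """Pixel-wise classification: convolve, squash with a sigmoid, and
    threshold. An FCN stacks many learned layers of exactly this kind;
    here a single hypothetical filter stands in for the whole network."""
    logits = conv2d_same(img, kernel, bias)
    sig = lambda v: 1.0 / (1.0 + math.exp(-v))
    return [[1 if sig(v) >= thr else 0 for v in row] for row in logits]
```

With an averaging kernel and a negative bias, bright regions map to 1 and dark regions to 0, mimicking a (crude) brightness-based water classifier.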
Waterline computation and obstacle detection
Once the segmentation mask for water/non-water is ready, two further steps are performed, namely, waterline computation and obstacle detection. Details about these two steps are provided below.
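The waterline step can be pictured as a robust line fit over the edge pixels of the binary mask. The sketch below is a minimal RANSAC line estimator under our own assumptions (iteration count, inlier tolerance, and the y = m*x + q parameterization are illustrative choices, not the paper's exact settings):

```python
import random

def ransac_waterline(edge_points, iters=200, inlier_tol=2.0, seed=0):
    """Fit a line y = m*x + q to edge points (e.g. the water/non-water
    mask boundary) with RANSAC: repeatedly sample two points, propose
    the line through them, and keep the model with the most inliers.
    Outlier edge pixels (obstacles, reflections) are thus ignored."""
    rng = random.Random(seed)
    best, best_inliers = None, -1
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(edge_points, 2)
        if x1 == x2:          # skip vertical candidates: the waterline
            continue          # is roughly horizontal in the image
        m = (y2 - y1) / (x2 - x1)
        q = y1 - m * x1
        inliers = sum(abs(y - (m * x + q)) <= inlier_tol
                      for x, y in edge_points)
        if inliers > best_inliers:
            best, best_inliers = (m, q), inliers
    return best   # (slope, intercept) of the best-supported line
```

Given mostly collinear boundary points plus a few obstacle outliers, the recovered slope and intercept follow the dominant line rather than the outliers.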
Experimental results
We tested both the capacity of our approach to segment images taken on the ASV and the accuracy of the waterline predicted from the segmented image. Five different network configurations were considered.
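The paper's exact evaluation protocol is in the full text; a standard way to score such binary segmentations is per-class Intersection-over-Union, sketched below (the flat-list mask representation is an illustrative simplification):

```python
def iou(pred, truth, cls=1):
    """Intersection-over-Union of one class between a predicted and a
    ground-truth binary mask, both given as flat lists of 0/1 labels.
    Returns 1.0 by convention when the class is absent from both."""
    inter = sum(p == cls and t == cls for p, t in zip(pred, truth))
    union = sum(p == cls or t == cls for p, t in zip(pred, truth))
    return inter / union if union else 1.0
```

For example, a prediction agreeing with the ground truth on one of three pixels labeled water scores an IoU of 1/3.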
Conclusions
In this paper, we have shown the use of a deep learning based method for waterline and object detection on a low-cost ASV. All the pixels in the images captured by the ASV are labeled into water and non-water classes using a CNN. A line is fit to the edges of this binary class mask to create the waterline prediction. Objects lying below the waterline are detected and tracked over time to generate alarms that can be sent to the navigation system.
A quantitative experimental evaluation has been
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Acknowledgments
We are grateful to Stefano Aldegheri for helping in implementing the waterline detection algorithm on the Jetson board and in generating the experimental results.
This work is partially funded by the European Union’s Horizon 2020 research and innovation programme under grant agreement No 689341.
References (34)
- On the approximate realization of continuous mappings by neural networks, Neural Netw. (1989)
- Skin lesion image segmentation using Delaunay triangulation for melanoma detection, Comput. Med. Imaging Graph. (2016)
- Detailed assessment of the market potential, and demand for, an EU ETV scheme (2013)
- Quantifying spatiotemporal greenhouse gas emissions using autonomous surface vehicles, J. Field Robotics (2017)
- A marine autonomous surface craft for long-duration, spatially explicit, multidisciplinary water column sampling in coastal and estuarine systems, J. Atmos. Ocean. Technol. (2015)
- A. Castellini, J. Blum, D. Bloisi, A. Farinelli, Intelligent battery management for aquatic drones based on task...
- A. Castellini, G. Chalkiadakis, A. Farinelli, Influence of state-variable constraints on partially observable monte...
- A formal basis for the heuristic determination of minimum cost paths, IEEE Trans. Syst. Sci. Cybern. (1968)
- Unsupervised activity recognition for autonomous water drones
- Subspace clustering for situation assessment in aquatic drones
- Visual obstacle avoidance for autonomous watercraft using smartphones
- Obstacle detection for image-guided surface water navigation
- Effective waterline detection of unmanned surface vehicles based on optical images, Sensors
- Obstacle detection for lake-deployed autonomous surface vehicles using RGB imagery, PLoS One
- MP-GeneticSynth: inferring biological network regulations from time series, Bioinformatics
- Scalable object detection using deep neural networks
Lorenzo Steccanella is a Ph.D. student in the area of Reinforcement Learning at UPF, since October 2018. Before starting the Ph.D. he worked in Blue Brain Project EPFL as an engineer in the machine learning area and as a research assistant at the University of Verona in the area of Robotics and Artificial Intelligence. His main interests are in the broad area of Artificial Intelligence and Robotics with a focus on Reinforcement Learning methods.
Dr Domenico D. Bloisi is assistant professor (tenure track) with the Department of Mathematics, Computer Science, and Economics at University of Basilicata (Italy). Formerly, he was assistant professor at Sapienza University of Rome (Italy) and assistant professor at University of Verona (Italy). He received his Ph.D., M.Sc., and B.Sc. degrees in Computer Engineering from Sapienza University of Rome in 2010, 2006 and 2004, respectively. His research interests are related to computer vision (including object detection and visual tracking), multi-sensor data fusion, and robotics. Moreover, Dr Bloisi collaborates with the SPQR robot soccer team, which is involved in the RoboCup competitions, and works on automatic methods for medical image segmentation for melanoma cancer prevention (in collaboration with the medical staff of the Istituto Dermopatico dell’Immacolata IDI-IRCCS).
Alberto Castellini is assistant professor at the computer science department of the Verona University, Italy. He received his Ph.D. in computer science from Verona University and had research and teaching experiences at both Verona University and Potsdam University/Max Planck Institute in Potsdam, Germany. His research interests are related to predictive modeling of complex systems, computational data analysis, statistical learning and artificial intelligence.
Alessandro Farinelli is associate professor at University of Verona, since December 2014. His research interests comprise theoretical and practical issues related to the development of Artificial Intelligent Systems applied to robotics. In particular, he focuses on coordination, decentralized optimization and information integration for Multi-Agent and Multi-Robot systems, control and evaluation of autonomous mobile robots. He was principal investigator for several national and international research projects in the broad area of Artificial Intelligence for robotic systems. He co-authored more than 100 peer-reviewed scientific contributions in top international journals and conferences.