Abstract

The degree of intelligence built into today's vehicles is constantly on the rise. Vehicles are being equipped with sensors whose purpose is to estimate the state of the vehicle and of its surrounding environment. Intelligent algorithms that process the sensory data can act at different levels, ranging from simple warnings, through evasive maneuvers (such as emergency braking), to complete autonomy. While it has been demonstrated that autonomous vehicles can rely solely on their on-board sensors, their performance can be improved through cooperation with other road vehicles; information coming from the infrastructure can be fused in as well. This is where communication between vehicles, as well as between vehicles and the infrastructure, comes into play. The main benefits of cooperation include larger coverage and extended situational awareness through the sharing of sensor data and vehicle intentions (trajectories). In this thesis, we address the cooperative perception problem. To solve this problem efficiently, we construct an end-to-end framework in three steps. First, we design an experimental platform that allows us to test our cooperative perception algorithms. In particular, we equip two fully electric Citroën C-ZERO cars with sensors, on-board computers and communication equipment. At the same time, we reproduce our platform in Webots, a high-fidelity simulation tool originally developed for mobile robots and recently extended to road vehicles. We develop and calibrate vehicle and sensor models with the goal of reproducing real-world conditions as closely as possible, and in turn facilitating the deployment on real cars of algorithms developed in simulation. Second, we design two cooperative algorithms for tracking multiple objects (cars and pedestrians) using laser and camera sensors. The key components of these algorithms are our cooperative fusion methods, which fuse data obtained from a cooperative vehicle with data obtained from on-board sensors. The algorithms are first evaluated in simulation and tested in specific scenarios. For instance, to showcase the power of our approach in a potential application, we design an overtaking decision algorithm that builds on our cooperative perception algorithm. The overtaking application demonstrates the added value of cooperative perception in situations with an occluded or insufficient sensory field of view. Third, we deploy a selected algorithm on real vehicles and validate it in real time. A distributed software framework is designed for this purpose, enabling a relatively smooth transition from simulated to real environments. The cooperative perception algorithm is subsequently enhanced to operate in more complex scenarios. Furthermore, we develop a cooperative localization method that increases the accuracy of the cooperative vehicles' relative localization, thus enabling our cooperative perception algorithm to work properly when deployed on moving vehicles. Overall, we develop an end-to-end framework for cooperative perception that unifies many different sensor technologies. Although the end goal has always been to deploy the framework on our test vehicles, we make a substantial effort to keep it as general as possible. Our framework represents a stepping stone towards more complex, multi-vehicle automated systems.
