Coverage path planning (CPP) is the problem of finding a path that covers the entire free space of a confined area, with applications ranging from robotic lawn mowing and vacuum cleaning to demining and search-and-rescue tasks. While offline methods can find provably complete, and in some cases optimal, paths for known environments, their value is limited in online scenarios where the environment is not known beforehand. In this case, the path must be planned online while mapping the environment. We investigate how suitable reinforcement learning is for this challenging problem, and analyze the components required to learn coverage paths efficiently, such as the action space, input feature representation, neural network architecture, and reward function. Compared to existing classical methods, this approach allows for a flexible path space and enables the agent to adapt to specific environment dynamics. In addition to local sensory inputs for acting on short-term obstacle detections, we propose to use frontier-based egocentric maps at multiple scales. This allows the agent to plan a long-term path in large-scale environments with feasible computational and memory complexity. Furthermore, we propose a novel total variation reward term that guides the agent not to leave behind small holes of uncovered free space. To validate the effectiveness of our approach, we perform extensive experiments in simulation with a 2D ranging sensor on different variations of the CPP problem, surpassing the performance of both previous RL-based approaches and highly specialized methods.
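To make the multi-scale egocentric map idea concrete, the following is a minimal sketch of how such an observation could be built from a global grid map. All names and parameters (`multiscale_egocentric_maps`, `crop_size`, `num_scales`, average pooling, translation-only centering) are illustrative assumptions, not the paper's exact construction: each scale covers a larger area but is pooled to the same fixed resolution, so nearby structure is kept in detail while distant structure, such as frontiers, is summarized coarsely.

```python
import numpy as np

def multiscale_egocentric_maps(global_map: np.ndarray,
                               position: tuple[int, int],
                               crop_size: int = 32,
                               num_scales: int = 4) -> np.ndarray:
    """Extract agent-centered crops of a global grid map at multiple scales.

    Scale s covers a (crop_size * 2**s)-cell square around the agent and is
    average-pooled back to crop_size x crop_size, giving constant input size
    regardless of environment extent. This sketch is translation-only; a full
    egocentric representation might also rotate crops to the agent's heading.
    """
    r, c = position
    maps = []
    for s in range(num_scales):
        k = 2 ** s
        half = crop_size * k // 2
        # Pad so crops near the map border stay well-defined.
        padded = np.pad(global_map, half, mode="constant", constant_values=0)
        crop = padded[r : r + 2 * half, c : c + 2 * half]
        # Average-pool the (crop_size*k)^2 crop down to crop_size^2.
        pooled = crop.reshape(crop_size, k, crop_size, k).mean(axis=(1, 3))
        maps.append(pooled)
    return np.stack(maps)  # shape: (num_scales, crop_size, crop_size)
```

Stacking one such tensor per map type (coverage, obstacles, frontiers) would yield a fixed-size multi-channel input for the policy network, which is what keeps computational and memory cost feasible in large-scale environments.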
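The total variation reward can likewise be illustrated with a short sketch. One plausible form, not necessarily the exact formulation used in the paper, penalizes growth in the total variation of the coverage map, i.e. in the length of the boundary between covered and uncovered cells; leaving small uncovered holes behind lengthens this boundary, so the agent is rewarded for closing them. The function names and the `scale` factor below are hypothetical.

```python
import numpy as np

def total_variation(cov: np.ndarray) -> float:
    """Anisotropic total variation of a 2D coverage map.

    `cov` is a binary grid where 1 marks covered cells. TV sums absolute
    differences between horizontally and vertically adjacent cells, which
    equals the length of the covered/uncovered boundary.
    """
    cov = cov.astype(np.float32)
    dv = np.abs(np.diff(cov, axis=0)).sum()
    dh = np.abs(np.diff(cov, axis=1)).sum()
    return float(dv + dh)

def tv_reward(cov_prev: np.ndarray, cov_curr: np.ndarray,
              scale: float = 1.0) -> float:
    """Reward term that is negative when the covered region's boundary
    grows between consecutive steps, discouraging ragged coverage that
    leaves small holes of uncovered free space behind."""
    return -scale * (total_variation(cov_curr) - total_variation(cov_prev))
```

In a full reward function this term would presumably be combined with the primary incentives, such as a reward for newly covered area and a penalty for collisions.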