Being able to explore unknown environments is a requirement for fully autonomous robots, and many learning-based methods have been proposed to learn an exploration strategy. In frontier-based exploration, learning algorithms aim to select the optimal or near-optimal frontier to explore next. Most of these methods represent the environment as a fixed-size image and feed it to a neural network. However, the size of the environment is usually unknown in advance, which prevents these methods from generalizing to real-world scenarios. To address this issue, we present a novel state representation based on 4D point-cloud-like information, comprising location, frontier, and distance information. We also design a neural network that processes this 4D point-cloud-like input and generates an estimated value for each frontier. The network is then trained within a standard reinforcement learning framework. We evaluate the proposed method against five other methods and test its scalability on a map much larger than any map in the training set. The experimental results demonstrate that our method requires shorter average traveling distances to explore entire environments and can be applied to maps of arbitrary size.
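To make the state representation concrete, the following is a minimal, hypothetical sketch (not the paper's implementation) of how a 4D point-cloud-like state could be built from a 2D occupancy grid: each free cell becomes a point (x, y, frontier flag, travel distance), where a frontier cell is a free cell bordering unknown space and distance is the BFS travel distance from the robot. The cell encodings (`FREE`, `OBSTACLE`, `UNKNOWN`) and the function `build_state` are assumed names for illustration.

```python
import numpy as np
from collections import deque

# Assumed occupancy-grid encoding (illustrative, not from the paper)
FREE, OBSTACLE, UNKNOWN = 0, 1, -1

def build_state(grid, robot):
    """Build an N x 4 point-cloud-like state from an occupancy grid.

    Each row is (x, y, frontier_flag, distance): the cell location,
    whether it lies on a frontier (a free cell adjacent to unknown
    space), and its BFS travel distance from the robot position.
    N varies with the map, so no fixed input size is assumed.
    """
    h, w = grid.shape
    neighbors = ((1, 0), (-1, 0), (0, 1), (0, -1))

    # BFS travel distances over free cells, starting from the robot
    dist = np.full((h, w), np.inf)
    dist[robot] = 0.0
    queue = deque([robot])
    while queue:
        x, y = queue.popleft()
        for dx, dy in neighbors:
            nx, ny = x + dx, y + dy
            if (0 <= nx < h and 0 <= ny < w
                    and grid[nx, ny] == FREE
                    and dist[nx, ny] == np.inf):
                dist[nx, ny] = dist[x, y] + 1.0
                queue.append((nx, ny))

    # One 4D point per reachable free cell
    points = []
    for x in range(h):
        for y in range(w):
            if grid[x, y] != FREE or dist[x, y] == np.inf:
                continue
            frontier = any(
                0 <= x + dx < h and 0 <= y + dy < w
                and grid[x + dx, y + dy] == UNKNOWN
                for dx, dy in neighbors)
            points.append((x, y, float(frontier), dist[x, y]))
    return np.array(points)  # shape (N, 4), N depends on map size
```

Because the output is a variable-length set of points rather than a fixed-size image, a network consuming it (e.g. a point-cloud architecture) can score frontiers on maps of any size.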