Neural network (NN)-based methods have emerged as an attractive approach to robot motion planning, owing to the strong learning capabilities of NN models and their inherently high parallelism. Despite progress in this direction, the efficient capture and processing of important sequential and spatial information, in a direct and simultaneous way, remains relatively under-explored. To overcome this challenge and unlock the potential of neural networks for motion planning tasks, in this paper we propose STP-Net, an end-to-end learning framework that fully extracts and leverages important spatio-temporal information to form an efficient neural motion planner. By interpreting the movement of the robot as a video clip, robot motion planning is transformed into a video prediction task that STP-Net can perform in a spatially and temporally efficient way. Empirical evaluations across different seen and unseen environments show that, with nearly 100% accuracy (i.e., success rate), STP-Net demonstrates very promising performance in terms of both planning speed and path cost. Compared with existing NN-based motion planners, STP-Net achieves at least 5x, 2.6x, and 1.8x faster planning speed with lower path cost on 2D Random Forest, 2D Maze, and 3D Random Forest environments, respectively. Furthermore, STP-Net can quickly and simultaneously compute multiple near-optimal paths in multi-robot motion planning tasks.
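To illustrate the framing of motion planning as iterative next-state prediction, the following is a minimal sketch. It is not STP-Net itself: the learned spatio-temporal predictor is replaced by a hypothetical one-step BFS lookahead (`next_frame`), so only the data flow — current map "frame" in, next robot cell out, rolled forward into a path — is shown. The grid encoding, function names, and planner loop are illustrative assumptions, not the paper's implementation.

```python
from collections import deque

def next_frame(grid, robot, goal):
    """Stand-in for a learned predictor: given the current 'frame'
    (occupancy grid + robot cell), emit the robot's next cell.
    A real model would infer this from spatio-temporal features;
    here a one-step BFS lookahead toward the goal plays that role."""
    h, w = len(grid), len(grid[0])
    prev = {robot: None}
    queue = deque([robot])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Walk the BFS parents back to the first move after the start.
            while prev[cell] != robot:
                cell = prev[cell]
            return cell
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < h and 0 <= nc < w and grid[nr][nc] == 0 and nxt not in prev:
                prev[nxt] = cell
                queue.append(nxt)
    return robot  # no reachable goal: stay put

def plan(grid, start, goal, max_steps=50):
    """Roll the predictor forward frame by frame, collecting the path."""
    path = [start]
    while path[-1] != goal and len(path) <= max_steps:
        step = next_frame(grid, path[-1], goal)
        if step == path[-1]:
            break  # predictor reports no progress is possible
        path.append(step)
    return path

# Toy map: 0 = free, 1 = obstacle; the robot must detour around row 1.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(plan(grid, (0, 0), (2, 0)))
# → [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]
```

The same loop structure extends naturally to the multi-robot case claimed in the abstract: each robot's position becomes an additional channel of the frame, and one forward pass of the predictor advances all of them at once.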