While classical approaches to autonomous robot navigation currently enable operation in certain environments, they break down in tightly constrained spaces, e.g., where the robot needs to engage in agile maneuvers to squeeze between obstacles. Recent machine learning techniques have the potential to address this shortcoming, but existing approaches require vast amounts of navigation experience for training, during which the robot must operate in close proximity to obstacles and risk collision. In this paper, we propose to side-step this requirement by introducing a new machine learning paradigm for autonomous navigation called learning from hallucination (LfH), which can use training data collected in completely safe environments to compute navigation controllers that result in fast, smooth, and safe navigation in highly constrained environments. Our experimental results show that the proposed LfH system outperforms three autonomous navigation baselines, including ones based on both classical and machine learning techniques, on a real robot and generalizes well to unseen environments.