While imitation learning for vision-based autonomous mobile robot navigation has recently received a great deal of attention in the research community, existing approaches typically require state-action demonstrations that were gathered using the deployment platform. However, what if one cannot easily outfit their platform to record these demonstration signals or, worse yet, the demonstrator does not have access to the platform at all? Is imitation learning for vision-based autonomous navigation even possible in such scenarios? In this work, we hypothesize that the answer is yes, and that recent ideas from the Imitation from Observation (IfO) literature can be brought to bear such that a robot can learn to navigate using only ego-centric video collected by a demonstrator, even in the presence of viewpoint mismatch. To this end, we introduce a new algorithm, Visual Observation only Imitation Learning for Autonomous navigation (VOILA), that can successfully learn navigation policies from a single video demonstration collected from a physically different agent. We evaluate VOILA in the photorealistic AirSim simulator and show that VOILA not only successfully imitates the expert, but also learns navigation policies that can generalize to novel environments. Further, we demonstrate the effectiveness of VOILA in a real-world setting by showing that it enables a wheeled Jackal robot to successfully imitate a human walking in an environment using a video recorded with a mobile phone camera.