A robot that can carry out a natural-language instruction has been a dream since before the Jetsons cartoon series imagined a life of leisure mediated by a fleet of attentive robot helpers. It is a dream that remains stubbornly distant. However, recent advances in vision and language methods have made incredible progress in closely related areas. This is significant because a robot interpreting a natural-language navigation instruction on the basis of what it sees is carrying out a vision and language process that is similar to Visual Question Answering. Both tasks can be interpreted as visually grounded sequence-to-sequence translation problems, and many of the same methods are applicable. To enable and encourage the application of vision and language methods to the problem of interpreting visually-grounded navigation instructions, we present the Matterport3D Simulator -- a large-scale reinforcement learning environment based on real imagery. Using this simulator, which can in future support a range of embodied vision and language tasks, we provide the first benchmark dataset for visually-grounded natural language navigation in real buildings -- the Room-to-Room (R2R) dataset.