Humans are excellent at understanding language and vision to accomplish a wide range of tasks. In contrast, creating general instruction-following embodied agents remains a difficult challenge. Prior work that uses pure language-only models lacks visual grounding, making it difficult to connect language instructions with visual observations. On the other hand, methods that use pre-trained multimodal models typically come with separate language and visual representations, requiring specialized network architectures to fuse them together. We propose a simple yet effective model for robots to solve instruction-following tasks in vision-based environments. Our \ours method consists of a multimodal transformer that encodes visual observations and language instructions, and a transformer-based policy that predicts actions based on the encoded representations. The multimodal transformer is pre-trained on millions of image-text pairs and natural language text, thereby producing generic cross-modal representations of observations and instructions. The transformer-based policy keeps track of the full history of observations and actions, and predicts actions autoregressively. Despite its simplicity, we show that this unified transformer model outperforms all state-of-the-art pre-trained or trained-from-scratch methods in both single-task and multi-task settings. Our model also shows better scalability and generalization ability than prior work.
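The architecture described above can be illustrated with a minimal sketch: a pre-trained multimodal encoder jointly embeds the instruction and each observation, and a causal transformer policy consumes the full history of observation and action embeddings to predict the next action autoregressively. The class and tensor layouts below (e.g. \texttt{PolicySketch}, the interleaving of observation and action tokens) are illustrative assumptions, not the paper's actual implementation.

\begin{verbatim}
import torch
import torch.nn as nn

class PolicySketch(nn.Module):
    """Sketch: a pre-trained multimodal encoder (assumed given) embeds the
    instruction jointly with each visual observation; a causal transformer
    over the full history of observations and actions predicts the next
    action autoregressively."""

    def __init__(self, encoder, embed_dim, num_actions,
                 num_layers=4, num_heads=8):
        super().__init__()
        self.encoder = encoder  # pre-trained multimodal transformer
        self.action_embed = nn.Embedding(num_actions, embed_dim)
        layer = nn.TransformerEncoderLayer(embed_dim, num_heads,
                                           batch_first=True)
        self.policy = nn.TransformerEncoder(layer, num_layers)
        self.action_head = nn.Linear(embed_dim, num_actions)

    def forward(self, images, instruction_tokens, past_actions):
        # images: (B, T, C, H, W); instruction_tokens: (B, L);
        # past_actions: (B, T) discrete action indices
        B, T = past_actions.shape
        # Encode each observation jointly with the instruction (assumes the
        # encoder returns a (B, D) embedding per timestep).
        obs_emb = torch.stack(
            [self.encoder(images[:, t], instruction_tokens)
             for t in range(T)], dim=1)            # (B, T, D)
        act_emb = self.action_embed(past_actions)  # (B, T, D)
        # Interleave tokens as o_1, a_1, o_2, a_2, ...
        seq = torch.stack([obs_emb, act_emb], dim=2).flatten(1, 2)  # (B, 2T, D)
        causal_mask = nn.Transformer.generate_square_subsequent_mask(2 * T)
        hidden = self.policy(seq, mask=causal_mask)
        # Predict the next action from each observation-token position.
        return self.action_head(hidden[:, 0::2])   # (B, T, num_actions)
\end{verbatim}

In this sketch the encoder can be kept frozen so that only the policy and the action head are trained, which is one simple way to reuse generic cross-modal representations for control.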