In this paper, we introduce a method for unifying language, action, and state information in a shared embedding space to facilitate a range of downstream tasks in robot learning. Our method, Contrastive Language, Action, and State Pre-training (CLASP), extends the CLIP formulation by incorporating distributional learning, capturing the inherent complexities and one-to-many relationships in behaviour-text alignment. By employing distributional outputs for both the text and behaviour encoders, our model effectively associates diverse textual commands with a single behaviour and vice versa. We demonstrate the utility of our method on the following downstream tasks: zero-shot text-behaviour retrieval, captioning unseen robot behaviours, and learning a behaviour prior for language-conditioned reinforcement learning. Our distributional encoders exhibit superior retrieval and captioning performance on unseen datasets, as well as the ability to generate meaningful exploratory behaviours from textual commands, capturing the intricate relationships between language, action, and state. This work represents an initial step towards developing a unified pre-trained model for robotics, with the potential to generalise to a broad range of downstream tasks.
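The abstract does not specify the training objective, so the following is a minimal illustrative sketch of one plausible reading: a CLIP-style symmetric InfoNCE loss computed on reparameterised samples drawn from Gaussian encoder outputs, which is one way to realise "distributional outputs for both text and behaviour encoders". The function name, the (mean, log-variance) parameterisation, and the temperature value are assumptions for illustration, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def distributional_clip_loss(text_mu, text_logvar, behav_mu, behav_logvar,
                             temperature=0.07):
    """Hypothetical sketch: symmetric InfoNCE over samples drawn from
    Gaussian encoder outputs (not the paper's confirmed objective)."""
    # Reparameterised samples from each modality's predicted distribution:
    # z = mu + sigma * eps, with sigma = exp(0.5 * logvar).
    text_z = text_mu + torch.randn_like(text_mu) * (0.5 * text_logvar).exp()
    behav_z = behav_mu + torch.randn_like(behav_mu) * (0.5 * behav_logvar).exp()

    # Cosine-similarity logits between all text/behaviour pairs in the batch.
    text_z = F.normalize(text_z, dim=-1)
    behav_z = F.normalize(behav_z, dim=-1)
    logits = text_z @ behav_z.t() / temperature  # shape (B, B)

    # CLIP-style symmetric cross-entropy: text-to-behaviour and behaviour-to-text,
    # with matched pairs on the diagonal as positives.
    targets = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```

Sampling from the predicted distributions, rather than matching point embeddings, is what would let a single behaviour sit close to several paraphrased commands (and vice versa) in the shared space.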