We focus on the task of language-conditioned grasping in clutter, in which a robot must grasp a target object specified by a language instruction. Previous works treat this as two separate stages: visual grounding to localize the target object, followed by grasp generation for that object. However, these works require object labels or visual attributes for grounding, which calls for handcrafted rules in the planner and restricts the range of admissible language instructions. In this paper, we propose to jointly model vision, language, and action with an object-centric representation. Our method supports more flexible language instructions and is not limited by visual grounding errors. Moreover, by exploiting the strong priors of a pre-trained multi-modal model and a pre-trained grasp model, sample efficiency is substantially improved and the sim-to-real gap is alleviated without additional data for transfer. A series of experiments in simulation and the real world shows that our method achieves a higher task success rate with fewer motions under more flexible language instructions. Furthermore, our method generalizes better to scenarios with unseen objects and language instructions.