We investigate the visual cross-embodiment imitation setting, in which agents learn policies from videos of other agents (such as humans) demonstrating the same task, but with stark differences in their embodiments -- shape, actions, end-effector dynamics, etc. In this work, we demonstrate that it is possible to automatically discover and learn vision-based reward functions from cross-embodiment demonstration videos that are robust to these differences. Specifically, we present a self-supervised method for Cross-embodiment Inverse Reinforcement Learning (XIRL) that leverages temporal cycle-consistency constraints to learn deep visual embeddings that capture task progression from offline videos of demonstrations across multiple expert agents, each performing the same task differently due to embodiment differences. Prior to our work, producing rewards from self-supervised embeddings typically required alignment with a reference trajectory, which may be difficult to acquire under stark embodiment differences. We show empirically that if the embeddings are aware of task progress, simply taking the negative distance between the current state and the goal state in the learned embedding space yields a useful reward for training policies with reinforcement learning. We find that our learned reward function not only works for embodiments seen during training, but also generalizes to entirely new embodiments. Additionally, when transferring real-world human demonstrations to a simulated robot, we find that XIRL is more sample efficient than current best methods. Qualitative results, code, and datasets are available at https://x-irl.github.io
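As a minimal sketch of the reward described above, assuming a learned embedding network (here a hypothetical callable `encoder` mapping image observations to task-progress embeddings, e.g., one trained with temporal cycle-consistency) and a precomputed goal embedding, the per-step reward is simply the negative Euclidean distance in embedding space:

```python
import numpy as np

def embedding_reward(encoder, observation, goal_embedding):
    """Negative L2 distance to the goal in the learned embedding space.

    `encoder` is a placeholder for a learned visual embedding network;
    it and `goal_embedding` are illustrative names, not a published API.
    """
    z = encoder(observation)                  # embed the current frame
    return -float(np.linalg.norm(z - goal_embedding))

# Usage sketch (assumed workflow): compute the goal embedding once, e.g.,
# by averaging the embeddings of the final frames of the demonstration
# videos, then query the reward at every environment step during RL.
# goal_embedding = np.mean([encoder(video[-1]) for video in demos], axis=0)
# reward = embedding_reward(encoder, obs, goal_embedding)
```

Note that under this formulation no per-trajectory alignment with a reference demonstration is needed at reward-computation time; the reward depends only on the current observation and a fixed goal embedding.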