We investigate the visual cross-embodiment imitation setting, in which agents learn policies from videos of other agents (such as humans) demonstrating the same task, but with stark differences in their embodiments -- shape, actions, end-effector dynamics, etc. In this work, we demonstrate that it is possible to automatically discover and learn vision-based reward functions from cross-embodiment demonstration videos that are robust to these differences. Specifically, we present a self-supervised method for Cross-embodiment Inverse Reinforcement Learning (XIRL) that leverages temporal cycle-consistency constraints to learn deep visual embeddings that capture task progression from offline videos of demonstrations across multiple expert agents, each performing the same task differently due to embodiment differences. Prior to our work, producing rewards from self-supervised embeddings has typically required alignment with a reference trajectory, which may be difficult to acquire. We show empirically that if the embeddings are aware of task-progress, simply taking the negative distance between the current state and goal state in the learned embedding space is useful as a reward for training policies with reinforcement learning. We find our learned reward function not only works for embodiments seen during training, but also generalizes to entirely new embodiments. We also find that XIRL policies are more sample efficient than baselines, and in some cases exceed the sample efficiency of the same agent trained with ground truth sparse rewards.
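The reward described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `embed_fn` stands in for the learned temporal cycle-consistent encoder, which is assumed here to map an image array to a 1-D embedding vector.

```python
import numpy as np

def embedding_distance_reward(embed_fn, frame, goal_frame):
    """Reward = negative L2 distance to the goal in the learned embedding space.

    `embed_fn` is a placeholder for a learned visual encoder that maps an
    image (H, W, C) array to a 1-D embedding; any encoder with that
    signature can be dropped in.
    """
    z = embed_fn(frame)            # embedding of the current observation
    z_goal = embed_fn(goal_frame)  # embedding of the goal observation
    # Closer to the goal in embedding space => reward closer to zero.
    return -float(np.linalg.norm(z - z_goal))
```

Because the reward depends only on the embedding distance and not on alignment with a reference trajectory, it can be queried at every environment step when training a policy with standard reinforcement learning.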