In this work, we consider one-shot imitation learning for object rearrangement tasks, where an AI agent needs to watch a single expert demonstration and learn to perform the same task in different environments. To achieve strong generalization, the AI agent must infer the spatial goal specification for the task. However, there can be multiple goal specifications that fit the given demonstration. To address this, we propose a reward learning approach, Graph-based Equivalence Mappings (GEM), that can discover spatial goal representations that are aligned with the intended goal specification, enabling successful generalization in unseen environments. Specifically, GEM represents a spatial goal specification by a reward function conditioned on i) a graph indicating important spatial relationships between objects and ii) state equivalence mappings for each edge in the graph indicating invariant properties of the corresponding relationship. GEM combines inverse reinforcement learning and active reward learning to efficiently improve the reward function by utilizing the graph structure and the domain randomization enabled by the equivalence mappings. We conducted experiments with simulated oracles and with human subjects. The results show that GEM can drastically improve the generalizability of the learned goal representations over strong baselines.
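To make the goal representation concrete, the following is a minimal Python sketch of a reward function conditioned on a relationship graph and per-edge equivalence mappings, in the spirit of the description above. It is not the paper's implementation; the class names, the position-vector object states, and the discrete set of scaling mappings are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

import numpy as np

# Hypothetical simplification: each object's state is a position vector.
State = Dict[str, np.ndarray]                     # object name -> position
Mapping = Callable[[np.ndarray], np.ndarray]      # equivalence mapping on a relative state

@dataclass
class Edge:
    """A spatial relationship between two objects. The equivalence mappings
    describe transformations of the demonstrated relative state that still
    satisfy the goal (invariant properties of the relationship)."""
    src: str
    dst: str
    target: np.ndarray        # relative state observed in the demonstration
    mappings: List[Mapping]

@dataclass
class GoalGraph:
    edges: List[Edge]

def edge_reward(edge: Edge, state: State) -> float:
    """Reward for one edge: negative distance of the current relative state
    to the closest goal-equivalent version of the demonstrated target."""
    rel = state[edge.dst] - state[edge.src]
    candidates = [m(edge.target) for m in edge.mappings] or [edge.target]
    return -min(np.linalg.norm(rel - t) for t in candidates)

def goal_reward(graph: GoalGraph, state: State) -> float:
    """Reward conditioned on the graph: sum over relationship edges."""
    return sum(edge_reward(e, state) for e in graph.edges)

# Example: "cup to the left of plate", where the exact distance is irrelevant,
# encoded here as scale-equivalence of the demonstrated offset.
graph = GoalGraph(edges=[
    Edge(src="plate", dst="cup",
         target=np.array([-0.3, 0.0]),
         mappings=[lambda v, s=s: s * v for s in (0.5, 1.0, 2.0)]),
])
print(goal_reward(graph, {"plate": np.array([0.0, 0.0]),
                          "cup": np.array([-0.6, 0.0])}))   # -> 0.0 (goal satisfied)
```

Under this sketch, the equivalence mappings enable the domain randomization mentioned above: applying a sampled mapping to a satisfying state yields another state that should receive the same reward.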